Jan 26 17:43:50 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 17:43:50 crc restorecon[4755]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 17:43:50 crc restorecon[4755]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc 
restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc 
restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 
17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 
17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 17:43:50 crc 
restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc 
restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 17:43:50 crc restorecon[4755]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc 
restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:50 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 
crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc 
restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc 
restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc 
restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc 
restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 17:43:51 crc restorecon[4755]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 17:43:51 crc restorecon[4755]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 17:43:51 crc kubenswrapper[4787]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 17:43:51 crc kubenswrapper[4787]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 17:43:51 crc kubenswrapper[4787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 17:43:51 crc kubenswrapper[4787]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 17:43:51 crc kubenswrapper[4787]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 17:43:51 crc kubenswrapper[4787]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.436496 4787 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440392 4787 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440414 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440419 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440424 4787 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440429 4787 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440435 4787 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440440 4787 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440444 4787 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440447 4787 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 
17:43:51.440451 4787 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440454 4787 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440458 4787 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440462 4787 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440467 4787 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440472 4787 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440476 4787 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440479 4787 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440483 4787 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440489 4787 feature_gate.go:330] unrecognized feature gate: Example Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440493 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440497 4787 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440501 4787 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440505 4787 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440510 4787 feature_gate.go:330] unrecognized feature 
gate: AutomatedEtcdBackup Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440514 4787 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440518 4787 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440521 4787 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440525 4787 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440529 4787 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440533 4787 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440536 4787 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440540 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440544 4787 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440547 4787 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440551 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440555 4787 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440560 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440564 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 17:43:51 crc kubenswrapper[4787]: 
W0126 17:43:51.440569 4787 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440574 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440579 4787 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440585 4787 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440590 4787 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440596 4787 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440602 4787 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440606 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440611 4787 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440615 4787 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440619 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440622 4787 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440626 4787 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440630 4787 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 17:43:51 crc 
kubenswrapper[4787]: W0126 17:43:51.440634 4787 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440637 4787 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440641 4787 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440645 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440648 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440652 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440657 4787 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440662 4787 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440666 4787 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440670 4787 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440673 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440677 4787 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440680 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440684 4787 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 
17:43:51.440687 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440690 4787 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440696 4787 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440700 4787 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.440705 4787 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440796 4787 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440806 4787 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440816 4787 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440823 4787 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440829 4787 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440835 4787 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440842 4787 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440855 4787 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440860 4787 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440864 4787 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440869 4787 
flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440873 4787 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440877 4787 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440882 4787 flags.go:64] FLAG: --cgroup-root="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440885 4787 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440890 4787 flags.go:64] FLAG: --client-ca-file="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440894 4787 flags.go:64] FLAG: --cloud-config="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440898 4787 flags.go:64] FLAG: --cloud-provider="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440902 4787 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440909 4787 flags.go:64] FLAG: --cluster-domain="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440914 4787 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440918 4787 flags.go:64] FLAG: --config-dir="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440922 4787 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440927 4787 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440933 4787 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440939 4787 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440957 4787 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 
17:43:51.440962 4787 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440967 4787 flags.go:64] FLAG: --contention-profiling="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440971 4787 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440975 4787 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440980 4787 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440984 4787 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440994 4787 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.440998 4787 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441002 4787 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441006 4787 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441011 4787 flags.go:64] FLAG: --enable-server="true" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441015 4787 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441046 4787 flags.go:64] FLAG: --event-burst="100" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441051 4787 flags.go:64] FLAG: --event-qps="50" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441056 4787 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441061 4787 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441068 4787 flags.go:64] FLAG: --eviction-hard="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 
17:43:51.441074 4787 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441078 4787 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441083 4787 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441087 4787 flags.go:64] FLAG: --eviction-soft="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441091 4787 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441095 4787 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441099 4787 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441104 4787 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441108 4787 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441112 4787 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441116 4787 flags.go:64] FLAG: --feature-gates="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441122 4787 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441126 4787 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441131 4787 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441135 4787 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441140 4787 flags.go:64] FLAG: --healthz-port="10248" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441144 4787 flags.go:64] FLAG: --help="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 
17:43:51.441148 4787 flags.go:64] FLAG: --hostname-override="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441152 4787 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441157 4787 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441161 4787 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441165 4787 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441170 4787 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441175 4787 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441180 4787 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441186 4787 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441190 4787 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441196 4787 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441201 4787 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441206 4787 flags.go:64] FLAG: --kube-reserved="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441211 4787 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441215 4787 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441219 4787 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441224 4787 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 17:43:51 crc 
kubenswrapper[4787]: I0126 17:43:51.441228 4787 flags.go:64] FLAG: --lock-file="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441233 4787 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441237 4787 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441241 4787 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441248 4787 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441253 4787 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441257 4787 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441261 4787 flags.go:64] FLAG: --logging-format="text" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441265 4787 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441270 4787 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441274 4787 flags.go:64] FLAG: --manifest-url="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441279 4787 flags.go:64] FLAG: --manifest-url-header="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441284 4787 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441289 4787 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441294 4787 flags.go:64] FLAG: --max-pods="110" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441299 4787 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441303 4787 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 17:43:51 crc 
kubenswrapper[4787]: I0126 17:43:51.441308 4787 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441312 4787 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441316 4787 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441320 4787 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441324 4787 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441335 4787 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441339 4787 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441343 4787 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441347 4787 flags.go:64] FLAG: --pod-cidr="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441351 4787 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441359 4787 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441363 4787 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441367 4787 flags.go:64] FLAG: --pods-per-core="0" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441371 4787 flags.go:64] FLAG: --port="10250" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441376 4787 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441382 4787 flags.go:64] FLAG: 
--provider-id="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441391 4787 flags.go:64] FLAG: --qos-reserved="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441400 4787 flags.go:64] FLAG: --read-only-port="10255" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441410 4787 flags.go:64] FLAG: --register-node="true" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441420 4787 flags.go:64] FLAG: --register-schedulable="true" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441431 4787 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441453 4787 flags.go:64] FLAG: --registry-burst="10" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441463 4787 flags.go:64] FLAG: --registry-qps="5" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441473 4787 flags.go:64] FLAG: --reserved-cpus="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441482 4787 flags.go:64] FLAG: --reserved-memory="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441493 4787 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441503 4787 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441513 4787 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441528 4787 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441537 4787 flags.go:64] FLAG: --runonce="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441546 4787 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441555 4787 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441564 4787 flags.go:64] FLAG: --seccomp-default="false" Jan 
26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441572 4787 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441582 4787 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441592 4787 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441601 4787 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441610 4787 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441620 4787 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441629 4787 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441638 4787 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441647 4787 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441656 4787 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441665 4787 flags.go:64] FLAG: --system-cgroups="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441674 4787 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441687 4787 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441696 4787 flags.go:64] FLAG: --tls-cert-file="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441705 4787 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441722 4787 flags.go:64] FLAG: --tls-min-version="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441731 4787 flags.go:64] FLAG: 
--tls-private-key-file="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441740 4787 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441749 4787 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441762 4787 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441771 4787 flags.go:64] FLAG: --v="2" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441782 4787 flags.go:64] FLAG: --version="false" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441794 4787 flags.go:64] FLAG: --vmodule="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441806 4787 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.441816 4787 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442056 4787 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442068 4787 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442080 4787 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442092 4787 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442101 4787 feature_gate.go:330] unrecognized feature gate: Example Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442109 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442117 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442126 4787 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442134 4787 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442144 4787 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442152 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442160 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442167 4787 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442176 4787 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442184 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442191 4787 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442199 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442207 4787 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442215 
4787 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442223 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442231 4787 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442238 4787 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442246 4787 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442254 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442261 4787 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442269 4787 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442279 4787 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442287 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442295 4787 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442303 4787 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442311 4787 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442318 4787 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442326 4787 feature_gate.go:330] unrecognized feature 
gate: PersistentIPsForVirtualization Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442333 4787 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442348 4787 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442356 4787 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442364 4787 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442372 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442380 4787 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442389 4787 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442397 4787 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442406 4787 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442414 4787 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442422 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442431 4787 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442438 4787 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442446 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 17:43:51 crc kubenswrapper[4787]: 
W0126 17:43:51.442454 4787 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442462 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442470 4787 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442479 4787 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442487 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442495 4787 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442502 4787 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442510 4787 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442520 4787 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442530 4787 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442538 4787 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442549 4787 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442556 4787 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442565 4787 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442572 4787 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442580 4787 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442587 4787 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442597 4787 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442607 4787 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442618 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442628 4787 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442637 4787 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442646 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.442656 4787 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.442843 4787 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.454288 4787 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.454328 4787 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454412 4787 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454420 4787 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454426 4787 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454433 4787 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454437 4787 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454441 4787 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454445 4787 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454448 4787 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454452 4787 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454456 4787 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454459 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454463 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454466 4787 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454470 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454473 4787 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454477 4787 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454480 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454484 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454488 4787 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454491 4787 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454495 4787 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454499 4787 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454502 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454506 4787 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454510 4787 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454517 4787 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454521 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454526 4787 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454529 4787 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454533 4787 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454536 4787 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454541 4787 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454545 4787 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454549 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454554 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454559 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454562 4787 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454566 4787 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454570 4787 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454574 4787 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454578 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454582 4787 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454587 4787 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454591 4787 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454595 4787 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454599 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454603 4787 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454607 4787 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454611 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454614 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454618 4787 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454621 4787 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454625 4787 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454628 4787 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454632 4787 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454635 4787 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454639 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454642 4787 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454645 4787 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454649 4787 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454652 4787 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454657 4787 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454661 4787 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454666 4787 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454670 4787 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454674 4787 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454678 4787 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454682 4787 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454685 4787 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454689 4787 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454694 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.454701 4787 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454810 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454817 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454821 4787 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454826 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454829 4787 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454833 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454836 4787 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454840 4787 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454844 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454850 4787 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454854 4787 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454858 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454861 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454865 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454868 4787 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454873 4787 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454877 4787 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454881 4787 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454885 4787 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454890 4787 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454894 4787 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454900 4787 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454904 4787 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454908 4787 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454912 4787 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454916 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454920 4787 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454923 4787 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454927 4787 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454930 4787 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454934 4787 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454937 4787 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454941 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454957 4787 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454962 4787 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454967 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454971 4787 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454975 4787 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454979 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454983 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454987 4787 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454991 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454995 4787 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.454999 4787 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455002 4787 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455006 4787 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455010 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455013 4787 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455017 4787 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455020 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455024 4787 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455027 4787 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455031 4787 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455035 4787 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455039 4787 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455042 4787 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455046 4787 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455051 4787 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455054 4787 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455058 4787 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455061 4787 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455065 4787 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455068 4787 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455072 4787 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455075 4787 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455079 4787 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455082 4787 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455086 4787 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455089 4787 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455093 4787 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.455096 4787 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.455102 4787 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.455283 4787 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.457539 4787 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.457623 4787 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.458410 4787 server.go:997] "Starting client certificate rotation"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.458430 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.458970 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-18 13:05:05.859690981 +0000 UTC
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.459034 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.463318 4787 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.465187 4787 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.465564 4787 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.476312 4787 log.go:25] "Validated CRI v1 runtime API"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.492701 4787 log.go:25] "Validated CRI v1 image API"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.493939 4787 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.496471 4787 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-17-39-10-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.496530 4787 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.513007 4787 manager.go:217] Machine: {Timestamp:2026-01-26 17:43:51.511323971 +0000 UTC m=+0.218460124 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:cb316aee-f977-43a3-b6ab-af2db7230a5b BootID:d92906ba-5f63-4676-93ca-b9fd3c104d01 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:2e:5b:30 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:2e:5b:30 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:03:29:6a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:36:7e:d8 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a2:41:f3 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:95:5a:5a Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:88:69:21 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4e:c1:4f:58:e7:73 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:1d:0f:c8:2e:5d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.513376 4787 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.513656 4787 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.514271 4787 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.514503 4787 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.514552 4787 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.515009 4787 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.515038 4787 container_manager_linux.go:303] "Creating device plugin manager"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.515266 4787 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.515303 4787 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.515679 4787 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.515787 4787 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.516836 4787 kubelet.go:418] "Attempting to sync node with API server"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.516874 4787 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.516904 4787 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.516926 4787 kubelet.go:324] "Adding apiserver pod source"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.516943 4787 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.518861 4787 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.518996 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.519020 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.519078 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.519114 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.519327 4787 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.520413 4787 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521044 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521074 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521089 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521100 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521123 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521136 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521147 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521166 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521181 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521194 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521211 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521222 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.521521 4787 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.522144 4787 server.go:1280] "Started kubelet" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.522627 4787 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.522805 4787 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.522784 4787 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.523549 4787 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 17:43:51 crc systemd[1]: Started Kubernetes Kubelet. Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.528370 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.528729 4787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.528773 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:48:22.075105337 +0000 UTC Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.529328 4787 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.529352 4787 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.529517 4787 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.530365 4787 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.530467 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.530685 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.530915 4787 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.530982 4787 factory.go:55] Registering systemd factory Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.530996 4787 factory.go:221] Registration of the systemd container factory successfully Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.532163 4787 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.530123 4787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.69:6443: connect: connection refused" 
event="&Event{ObjectMeta:{crc.188e58e20de4232a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 17:43:51.522091818 +0000 UTC m=+0.229227961,LastTimestamp:2026-01-26 17:43:51.522091818 +0000 UTC m=+0.229227961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.533751 4787 server.go:460] "Adding debug handlers to kubelet server" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.534019 4787 factory.go:153] Registering CRI-O factory Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.534060 4787 factory.go:221] Registration of the crio container factory successfully Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.534124 4787 factory.go:103] Registering Raw factory Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.534158 4787 manager.go:1196] Started watching for new ooms in manager Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.535417 4787 manager.go:319] Starting recovery of all containers Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540439 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540506 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 
17:43:51.540526 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540552 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540565 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540576 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540588 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540600 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540682 4787 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540698 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540710 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540723 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540735 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540756 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540769 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540779 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540790 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540841 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540855 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540868 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540879 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540892 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540905 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540920 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540932 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540964 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.540989 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541003 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541017 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541029 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541041 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541054 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541066 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541079 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541091 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541104 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541117 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541129 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541141 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541154 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541168 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541181 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541193 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541206 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541253 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541267 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541281 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541293 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541307 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541321 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541335 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541346 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541363 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541375 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541388 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541400 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541413 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541425 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541436 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541447 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541458 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541469 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541479 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541490 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541502 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541514 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541525 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541537 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541550 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541560 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541570 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541582 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541595 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541605 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541617 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541628 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541641 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541652 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541707 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541721 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.541735 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" 
seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544737 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544781 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544796 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544809 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544825 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544839 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544856 4787 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544886 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544905 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544921 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544937 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544973 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.544989 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545015 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545030 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545044 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545057 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545069 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545081 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545094 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545106 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545121 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545133 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545153 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545167 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" 
Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545181 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545197 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545212 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545226 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545242 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545256 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 26 17:43:51 crc 
kubenswrapper[4787]: I0126 17:43:51.545271 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545285 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545299 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545342 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545356 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545368 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545380 4787 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545394 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545407 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545420 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545432 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545445 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545457 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545470 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545486 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545507 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545521 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545533 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545546 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545560 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545572 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545585 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545598 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545611 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545624 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" 
seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545637 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545651 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545664 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.545678 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547176 4787 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547225 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547250 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547267 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547283 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547298 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547314 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547332 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" 
seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547349 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547365 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547382 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547401 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547416 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547429 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 
17:43:51.547444 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547460 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547475 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547492 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547508 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547523 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547539 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547554 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547573 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547624 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547642 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547659 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547675 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547693 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547708 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547725 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547742 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547757 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547772 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547815 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547832 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547847 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547863 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547878 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547894 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547910 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547925 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547941 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547979 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.547996 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548011 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548027 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548042 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548058 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548076 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548094 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548111 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" 
seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548126 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548144 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548159 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548177 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548193 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548209 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548225 4787 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548240 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548257 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548272 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548287 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548303 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548320 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548337 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548354 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548371 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548386 4787 reconstruct.go:97] "Volume reconstruction finished" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.548398 4787 reconciler.go:26] "Reconciler: start to sync state" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.568294 4787 manager.go:324] Recovery completed Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.584000 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.586153 4787 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.586814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.586871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.586886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.587910 4787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.587973 4787 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.588004 4787 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.588007 4787 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.588035 4787 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.588069 4787 state_mem.go:36] "Initialized new in-memory state store" Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.588071 4787 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 17:43:51 crc kubenswrapper[4787]: W0126 17:43:51.589199 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.589270 4787 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.599361 4787 policy_none.go:49] "None policy: Start" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.600613 4787 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.600663 4787 state_mem.go:35] "Initializing new in-memory state store" Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.632412 4787 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.663154 4787 manager.go:334] "Starting Device Plugin manager" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.663221 4787 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.663234 4787 server.go:79] "Starting device plugin registration server" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.663698 4787 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.663714 4787 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.663960 4787 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.664045 4787 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.664053 4787 plugin_manager.go:118] "Starting Kubelet 
Plugin Manager" Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.673897 4787 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.689170 4787 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.689285 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.690637 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.690678 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.690691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.690811 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.691194 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.691260 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.691975 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.692019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.692033 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.692132 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.692462 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.692532 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.692551 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.692601 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.692615 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.692826 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.692853 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.692867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.693022 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.693147 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.693194 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.693803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.693827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.693830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.693838 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.693847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.693857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.693982 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.694101 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.694117 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.694127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.694374 
4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.694408 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.694757 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.694793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.694804 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.695980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.696035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.696059 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.696086 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.696182 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.700874 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.700926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.700939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.731978 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.751903 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752017 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752045 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752069 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752096 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752120 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752141 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752161 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752208 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752239 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752254 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752288 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752319 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752352 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.752372 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.764905 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.766009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.766044 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.766056 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.766082 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.766482 4787 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.69:6443: connect: connection refused" node="crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853607 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853633 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853659 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853679 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853698 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853717 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853739 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853763 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853784 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853805 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853829 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853822 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853882 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853852 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853964 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853944 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853905 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853995 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.854033 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853841 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.854044 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.854029 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.853901 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.854087 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.854173 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.854187 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.854236 4787 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.854267 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.854401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.968313 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.970024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.970101 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.970114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:51 crc kubenswrapper[4787]: I0126 17:43:51.970150 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 17:43:51 crc kubenswrapper[4787]: E0126 17:43:51.970859 4787 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.69:6443: connect: connection refused" node="crc" Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.017240 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.038251 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 17:43:52 crc kubenswrapper[4787]: W0126 17:43:52.046325 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0aa7155470b43d678e7be2c8d00bc86d01be08f7f1bfb352d2262727afb03ea7 WatchSource:0}: Error finding container 0aa7155470b43d678e7be2c8d00bc86d01be08f7f1bfb352d2262727afb03ea7: Status 404 returned error can't find the container with id 0aa7155470b43d678e7be2c8d00bc86d01be08f7f1bfb352d2262727afb03ea7 Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.057738 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.064547 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.068968 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:43:52 crc kubenswrapper[4787]: W0126 17:43:52.074485 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-bc30c552eea2a02e3c8669baef2135f89b0cdc8c1b393eb50c1494afe49cde13 WatchSource:0}: Error finding container bc30c552eea2a02e3c8669baef2135f89b0cdc8c1b393eb50c1494afe49cde13: Status 404 returned error can't find the container with id bc30c552eea2a02e3c8669baef2135f89b0cdc8c1b393eb50c1494afe49cde13 Jan 26 17:43:52 crc kubenswrapper[4787]: E0126 17:43:52.133313 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.371877 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.374118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.374187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.374209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.374244 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 17:43:52 crc kubenswrapper[4787]: E0126 17:43:52.374886 4787 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.69:6443: connect: connection refused" node="crc" Jan 26 17:43:52 crc kubenswrapper[4787]: W0126 17:43:52.515847 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-bcde8f159515543f1af5db09fd965d1499496679ae1e054e161160105f9ab2ff WatchSource:0}: Error finding container bcde8f159515543f1af5db09fd965d1499496679ae1e054e161160105f9ab2ff: Status 404 returned error can't find the container with id bcde8f159515543f1af5db09fd965d1499496679ae1e054e161160105f9ab2ff Jan 26 17:43:52 crc kubenswrapper[4787]: W0126 17:43:52.521045 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-448cf563b7b8b109245f8c5fbaed88a335ac97845ce7c34410a6845e4dd00c4d WatchSource:0}: Error finding container 448cf563b7b8b109245f8c5fbaed88a335ac97845ce7c34410a6845e4dd00c4d: Status 404 returned error can't find the container with id 448cf563b7b8b109245f8c5fbaed88a335ac97845ce7c34410a6845e4dd00c4d Jan 26 17:43:52 crc kubenswrapper[4787]: W0126 17:43:52.522208 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a4dbb5f8fabe7b9d5b9fdbb58c9d26a417775a4f3b260777f9a1c5c3fb994ea9 WatchSource:0}: Error finding container a4dbb5f8fabe7b9d5b9fdbb58c9d26a417775a4f3b260777f9a1c5c3fb994ea9: Status 404 returned error can't find the container with id a4dbb5f8fabe7b9d5b9fdbb58c9d26a417775a4f3b260777f9a1c5c3fb994ea9 Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.523168 4787 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 17:43:52 crc 
kubenswrapper[4787]: I0126 17:43:52.528956 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:30:08.92740567 +0000 UTC Jan 26 17:43:52 crc kubenswrapper[4787]: W0126 17:43:52.581091 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 17:43:52 crc kubenswrapper[4787]: E0126 17:43:52.581198 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.599179 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0aa7155470b43d678e7be2c8d00bc86d01be08f7f1bfb352d2262727afb03ea7"} Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.600511 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a4dbb5f8fabe7b9d5b9fdbb58c9d26a417775a4f3b260777f9a1c5c3fb994ea9"} Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.601892 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"448cf563b7b8b109245f8c5fbaed88a335ac97845ce7c34410a6845e4dd00c4d"} Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 
17:43:52.603389 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bcde8f159515543f1af5db09fd965d1499496679ae1e054e161160105f9ab2ff"} Jan 26 17:43:52 crc kubenswrapper[4787]: I0126 17:43:52.604411 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bc30c552eea2a02e3c8669baef2135f89b0cdc8c1b393eb50c1494afe49cde13"} Jan 26 17:43:52 crc kubenswrapper[4787]: W0126 17:43:52.815586 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 17:43:52 crc kubenswrapper[4787]: E0126 17:43:52.815683 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Jan 26 17:43:52 crc kubenswrapper[4787]: W0126 17:43:52.861291 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 17:43:52 crc kubenswrapper[4787]: E0126 17:43:52.861385 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection 
refused" logger="UnhandledError" Jan 26 17:43:52 crc kubenswrapper[4787]: W0126 17:43:52.907585 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 17:43:52 crc kubenswrapper[4787]: E0126 17:43:52.907678 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Jan 26 17:43:52 crc kubenswrapper[4787]: E0126 17:43:52.936140 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.175792 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.177657 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.177715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.177728 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.177761 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 17:43:53 crc kubenswrapper[4787]: E0126 17:43:53.178657 4787 kubelet_node_status.go:99] "Unable to 
register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.523999 4787 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.529104 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:19:03.345649295 +0000 UTC Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.608492 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645"} Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.608552 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed"} Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.610159 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40" exitCode=0 Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.610247 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40"} Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 
17:43:53.610398 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.611651 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.611688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.611702 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.613288 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.613981 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.614004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.614015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.615321 4787 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5" exitCode=0 Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.615418 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5"} Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.615597 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.617709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.617759 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.617772 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.620322 4787 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9c8666c5f98eb27e87b5d6f78ba68c019c8d97e0bd20b26184edb0373634c3a4" exitCode=0 Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.620395 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9c8666c5f98eb27e87b5d6f78ba68c019c8d97e0bd20b26184edb0373634c3a4"} Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.620495 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.622657 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.622682 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.622696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.625199 4787 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerID="56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b" exitCode=0 Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.625242 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b"} Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.625337 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.626397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.626421 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.626429 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:53 crc kubenswrapper[4787]: I0126 17:43:53.645032 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 17:43:53 crc kubenswrapper[4787]: E0126 17:43:53.646360 4787 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.529923 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:51:43.912517791 +0000 UTC Jan 26 17:43:54 crc kubenswrapper[4787]: 
I0126 17:43:54.638992 4787 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4" exitCode=0 Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.639087 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4"} Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.639166 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.640159 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.640214 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.640230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.642342 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fbc8d9414a1bf5171bcce7f3ad7c9e36065fd777643fe23c5f580e050ea0da22"} Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.642376 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.643437 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.643475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.643489 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.646624 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5646bb0a80079e0292e226a56941575cb44ba454fbead79745afd303d721c9dd"} Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.646655 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.646655 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d681c7d6a4d81e784dbc46d62ac85a1a2494d8518a0b58e37db2b4b5cb7b22ab"} Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.646673 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"50bb15d58e74b242e0d6c51fe7ab5f126525dab3e42e519a4d3f2c1ccac0ee28"} Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.649898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.649994 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.650010 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.652544 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.652532 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93"} Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.652658 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658"} Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.653577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.653621 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.653645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.663670 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05"} Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.663728 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f"} Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.663745 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610"} Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.663757 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784"} Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.682464 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.779231 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.781060 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.781100 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.781111 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:54 crc kubenswrapper[4787]: I0126 17:43:54.781137 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.530651 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:57:28.936309467 +0000 UTC Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.670366 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53"} Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.670466 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.671530 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.671584 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.671598 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.674324 4787 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976" exitCode=0 Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.674372 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976"} Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.674489 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.674532 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.674489 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.674729 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.674835 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.675589 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.675694 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.675705 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.675830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.675850 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.675786 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.675587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.676151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.676165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.676937 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.676974 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:55 crc kubenswrapper[4787]: I0126 17:43:55.677002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.532436 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:00:24.722218354 +0000 UTC Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.685839 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165"} Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.685903 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc"} Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.685917 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639"} Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.685961 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40"} Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.685983 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.686008 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.686093 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.685974 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed"} Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.686660 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.687665 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.687727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.687738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.687689 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.687824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.687832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.687842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.687856 4787 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.687869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.772095 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.772334 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.773554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.773586 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:56 crc kubenswrapper[4787]: I0126 17:43:56.773595 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:57 crc kubenswrapper[4787]: I0126 17:43:57.532585 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:00:06.805894788 +0000 UTC Jan 26 17:43:57 crc kubenswrapper[4787]: I0126 17:43:57.688324 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:57 crc kubenswrapper[4787]: I0126 17:43:57.688387 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:57 crc kubenswrapper[4787]: I0126 17:43:57.689567 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:57 crc kubenswrapper[4787]: I0126 17:43:57.689610 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 17:43:57 crc kubenswrapper[4787]: I0126 17:43:57.689624 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:57 crc kubenswrapper[4787]: I0126 17:43:57.689680 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:57 crc kubenswrapper[4787]: I0126 17:43:57.689698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:57 crc kubenswrapper[4787]: I0126 17:43:57.689709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:57 crc kubenswrapper[4787]: I0126 17:43:57.741484 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 17:43:58 crc kubenswrapper[4787]: I0126 17:43:58.533793 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:43:52.366252097 +0000 UTC Jan 26 17:43:58 crc kubenswrapper[4787]: I0126 17:43:58.708833 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 17:43:58 crc kubenswrapper[4787]: I0126 17:43:58.709124 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:58 crc kubenswrapper[4787]: I0126 17:43:58.710584 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:58 crc kubenswrapper[4787]: I0126 17:43:58.710640 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:58 crc kubenswrapper[4787]: I0126 17:43:58.710727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:43:59 crc 
kubenswrapper[4787]: I0126 17:43:59.534493 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:29:19.699256435 +0000 UTC Jan 26 17:43:59 crc kubenswrapper[4787]: I0126 17:43:59.852483 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:43:59 crc kubenswrapper[4787]: I0126 17:43:59.852792 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:43:59 crc kubenswrapper[4787]: I0126 17:43:59.854492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:43:59 crc kubenswrapper[4787]: I0126 17:43:59.854618 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:43:59 crc kubenswrapper[4787]: I0126 17:43:59.854645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:00 crc kubenswrapper[4787]: I0126 17:44:00.405164 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:44:00 crc kubenswrapper[4787]: I0126 17:44:00.405401 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:44:00 crc kubenswrapper[4787]: I0126 17:44:00.444371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:00 crc kubenswrapper[4787]: I0126 17:44:00.444421 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:00 crc kubenswrapper[4787]: I0126 17:44:00.444434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:00 
crc kubenswrapper[4787]: I0126 17:44:00.535339 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:18:53.580476493 +0000 UTC Jan 26 17:44:00 crc kubenswrapper[4787]: I0126 17:44:00.607057 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:44:00 crc kubenswrapper[4787]: I0126 17:44:00.697660 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:44:00 crc kubenswrapper[4787]: I0126 17:44:00.698701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:00 crc kubenswrapper[4787]: I0126 17:44:00.698758 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:00 crc kubenswrapper[4787]: I0126 17:44:00.698772 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:01 crc kubenswrapper[4787]: I0126 17:44:01.536486 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:27:11.143432936 +0000 UTC Jan 26 17:44:01 crc kubenswrapper[4787]: E0126 17:44:01.674065 4787 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 17:44:01 crc kubenswrapper[4787]: I0126 17:44:01.809731 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:44:01 crc kubenswrapper[4787]: I0126 17:44:01.810008 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:44:01 crc kubenswrapper[4787]: I0126 17:44:01.811801 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:01 crc kubenswrapper[4787]: I0126 17:44:01.811865 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:01 crc kubenswrapper[4787]: I0126 17:44:01.811889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:02 crc kubenswrapper[4787]: I0126 17:44:02.537470 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:58:07.295331956 +0000 UTC Jan 26 17:44:02 crc kubenswrapper[4787]: I0126 17:44:02.909045 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 17:44:02 crc kubenswrapper[4787]: I0126 17:44:02.909221 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:44:02 crc kubenswrapper[4787]: I0126 17:44:02.913398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:02 crc kubenswrapper[4787]: I0126 17:44:02.913446 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:02 crc kubenswrapper[4787]: I0126 17:44:02.913457 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.032896 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.033100 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.034513 4787 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.034564 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.034576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.043593 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.405642 4787 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.405731 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.538254 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:10:22.589683782 +0000 UTC Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.709530 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.710879 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.710941 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.710980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:03 crc kubenswrapper[4787]: I0126 17:44:03.715994 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:44:04 crc kubenswrapper[4787]: I0126 17:44:04.524043 4787 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 17:44:04 crc kubenswrapper[4787]: I0126 17:44:04.538629 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:25:27.515649031 +0000 UTC Jan 26 17:44:04 crc kubenswrapper[4787]: E0126 17:44:04.538730 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 26 17:44:04 crc kubenswrapper[4787]: I0126 17:44:04.711853 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:44:04 crc kubenswrapper[4787]: I0126 17:44:04.713382 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:04 crc kubenswrapper[4787]: I0126 17:44:04.713473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 
17:44:04 crc kubenswrapper[4787]: I0126 17:44:04.713490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:04 crc kubenswrapper[4787]: W0126 17:44:04.722214 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 17:44:04 crc kubenswrapper[4787]: I0126 17:44:04.722293 4787 trace.go:236] Trace[1038054758]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 17:43:54.721) (total time: 10001ms): Jan 26 17:44:04 crc kubenswrapper[4787]: Trace[1038054758]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:44:04.722) Jan 26 17:44:04 crc kubenswrapper[4787]: Trace[1038054758]: [10.001251622s] [10.001251622s] END Jan 26 17:44:04 crc kubenswrapper[4787]: E0126 17:44:04.722316 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 17:44:04 crc kubenswrapper[4787]: I0126 17:44:04.755906 4787 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 17:44:04 crc kubenswrapper[4787]: I0126 17:44:04.755982 4787 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 17:44:04 crc kubenswrapper[4787]: I0126 17:44:04.858038 4787 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]log ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]etcd ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/priority-and-fairness-filter ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/start-apiextensions-informers ok Jan 26 17:44:04 crc kubenswrapper[4787]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Jan 26 17:44:04 crc kubenswrapper[4787]: [-]poststarthook/crd-informer-synced failed: reason withheld Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/start-system-namespaces-controller ok Jan 26 17:44:04 crc kubenswrapper[4787]: 
[+]poststarthook/start-cluster-authentication-info-controller ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 26 17:44:04 crc kubenswrapper[4787]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 26 17:44:04 crc kubenswrapper[4787]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/bootstrap-controller ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/start-kube-aggregator-informers ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 26 17:44:04 crc kubenswrapper[4787]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 26 17:44:04 crc kubenswrapper[4787]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]autoregister-completion ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/apiservice-openapi-controller ok Jan 26 17:44:04 crc kubenswrapper[4787]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 26 17:44:04 crc 
kubenswrapper[4787]: livez check failed Jan 26 17:44:04 crc kubenswrapper[4787]: I0126 17:44:04.858112 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:44:05 crc kubenswrapper[4787]: I0126 17:44:05.539064 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:01:22.285320745 +0000 UTC Jan 26 17:44:06 crc kubenswrapper[4787]: I0126 17:44:06.539219 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:31:51.564845568 +0000 UTC Jan 26 17:44:07 crc kubenswrapper[4787]: I0126 17:44:07.539703 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:04:18.816482384 +0000 UTC Jan 26 17:44:08 crc kubenswrapper[4787]: I0126 17:44:08.539912 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:15:28.261516186 +0000 UTC Jan 26 17:44:08 crc kubenswrapper[4787]: I0126 17:44:08.817030 4787 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 17:44:09 crc kubenswrapper[4787]: I0126 17:44:09.540334 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:57:09.449389581 +0000 UTC Jan 26 17:44:09 crc kubenswrapper[4787]: I0126 17:44:09.756732 4787 trace.go:236] Trace[80435616]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 17:43:55.365) (total time: 14390ms): Jan 26 17:44:09 crc 
kubenswrapper[4787]: Trace[80435616]: ---"Objects listed" error: 14390ms (17:44:09.756) Jan 26 17:44:09 crc kubenswrapper[4787]: Trace[80435616]: [14.390808359s] [14.390808359s] END Jan 26 17:44:09 crc kubenswrapper[4787]: I0126 17:44:09.756812 4787 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 17:44:09 crc kubenswrapper[4787]: E0126 17:44:09.758492 4787 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 17:44:09 crc kubenswrapper[4787]: I0126 17:44:09.758531 4787 trace.go:236] Trace[341368284]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 17:43:55.074) (total time: 14684ms): Jan 26 17:44:09 crc kubenswrapper[4787]: Trace[341368284]: ---"Objects listed" error: 14684ms (17:44:09.758) Jan 26 17:44:09 crc kubenswrapper[4787]: Trace[341368284]: [14.684356372s] [14.684356372s] END Jan 26 17:44:09 crc kubenswrapper[4787]: I0126 17:44:09.758569 4787 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 17:44:09 crc kubenswrapper[4787]: I0126 17:44:09.759311 4787 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 17:44:09 crc kubenswrapper[4787]: I0126 17:44:09.759536 4787 trace.go:236] Trace[830493587]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 17:43:55.086) (total time: 14672ms): Jan 26 17:44:09 crc kubenswrapper[4787]: Trace[830493587]: ---"Objects listed" error: 14672ms (17:44:09.759) Jan 26 17:44:09 crc kubenswrapper[4787]: Trace[830493587]: [14.672687695s] [14.672687695s] END Jan 26 17:44:09 crc kubenswrapper[4787]: I0126 17:44:09.759553 4787 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 17:44:09 crc kubenswrapper[4787]: I0126 17:44:09.762725 4787 
reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 17:44:09 crc kubenswrapper[4787]: I0126 17:44:09.856543 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:44:09 crc kubenswrapper[4787]: I0126 17:44:09.860975 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.163147 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.527222 4787 apiserver.go:52] "Watching apiserver" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.529508 4787 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.529838 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.530190 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.530433 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.530648 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.530577 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.530555 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.530795 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.530848 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.530896 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.531138 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.533246 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.533708 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.533904 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.534130 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.534240 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.534260 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.535318 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.535912 4787 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.536987 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.540603 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:32:41.705539263 +0000 UTC Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.555712 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.572358 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.584906 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.596725 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.616777 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.626743 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.630347 4787 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.641299 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.656265 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.656741 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.660860 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.665001 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.665400 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.665524 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.665635 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.665731 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") 
pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.665845 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.665975 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666085 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666182 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666274 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666377 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666479 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666578 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666717 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666854 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666435 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: 
"43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666446 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666442 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666570 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666604 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666690 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666719 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666770 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666935 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667070 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.666981 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667207 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667241 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667269 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 17:44:10 crc 
kubenswrapper[4787]: I0126 17:44:10.667293 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667298 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667317 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667454 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667503 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667531 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667546 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667600 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667654 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667702 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667754 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" 
(UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667805 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667862 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667921 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667808 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667852 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667934 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667938 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668014 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668070 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668090 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668121 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668172 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668186 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668184 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668220 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668270 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668287 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668321 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668375 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668422 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668443 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668466 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668521 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668569 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668618 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668668 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668720 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668775 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668824 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668871 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668920 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669004 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669056 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669106 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669152 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668470 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668503 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668538 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668537 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668590 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668684 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668790 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669300 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668310 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669205 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668917 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.668977 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669064 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669379 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669418 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669456 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669492 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669523 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669568 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669599 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669627 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669657 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669688 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669720 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669754 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669784 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669819 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669849 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669881 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669914 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669974 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670006 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670040 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670073 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670105 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670133 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670163 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670190 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670219 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670455 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670478 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670499 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670536 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670566 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670595 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670659 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670704 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670737 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670771 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670800 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670833 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670916 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670975 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671010 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671046 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671078 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671103 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671152 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671187 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671222 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671258 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671289 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671323 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671353 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671383 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671413 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671443 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671512 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671541 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671569 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671598 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671637 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671666 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671694 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671724 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671754 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671782 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671808 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671937 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672222 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672254 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672284 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672322 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672353 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672384 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672412 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672441 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672470 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672499 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672531 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 17:44:10 crc kubenswrapper[4787]: 
I0126 17:44:10.672561 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672593 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672625 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672652 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672683 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672713 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672746 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672777 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672810 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672841 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672870 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672901 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672969 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673001 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673034 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673068 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673098 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673130 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673165 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673194 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673226 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673259 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673290 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673325 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673361 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673392 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673426 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673458 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673491 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673527 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673561 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673596 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669075 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.667735 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669408 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669526 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.669624 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670005 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670101 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670726 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.670900 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.671796 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672706 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.672743 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673002 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673262 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673400 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673502 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.673629 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:44:11.17361041 +0000 UTC m=+19.880746543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675437 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675492 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675531 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675572 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675601 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675612 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675653 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675704 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675748 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675760 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675788 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675825 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675865 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675901 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675935 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676022 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676063 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676100 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676135 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676171 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676207 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676160 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676242 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676273 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676281 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676345 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676343 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676379 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676411 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676438 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676461 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676486 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676509 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676584 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676613 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676634 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676662 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676674 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676743 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676687 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673876 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.674047 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.674078 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.674379 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.674547 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.674541 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.674944 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675203 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.675395 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.677291 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.677345 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.678003 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.678031 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.678341 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.678502 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.678517 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.678522 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.678587 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.678850 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.679071 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.679158 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.679185 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.679255 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.679600 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.679629 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.679760 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680033 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.679944 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676710 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680068 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680116 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680136 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680145 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680192 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680239 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680278 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680317 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680359 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") 
pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680397 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680431 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680470 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680508 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680547 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680586 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680621 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680814 4787 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680839 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680856 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680872 4787 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680886 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680899 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680915 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680930 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680944 4787 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.680992 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681010 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681028 4787 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681045 4787 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681066 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681087 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681108 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681127 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681145 4787 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc 
kubenswrapper[4787]: I0126 17:44:10.681153 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681165 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681170 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681186 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681204 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682080 4787 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682110 4787 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682132 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682150 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682165 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 
crc kubenswrapper[4787]: I0126 17:44:10.682179 4787 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682194 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682210 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682226 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682241 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682254 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682268 4787 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682282 4787 reconciler_common.go:293] "Volume detached for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682296 4787 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682309 4787 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682323 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682338 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682352 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682367 4787 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682381 4787 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682396 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682409 4787 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682422 4787 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682435 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682450 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682466 4787 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682479 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc 
kubenswrapper[4787]: I0126 17:44:10.682492 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682511 4787 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682527 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682541 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682556 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682571 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682587 4787 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682601 4787 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682655 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682671 4787 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682685 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682700 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682716 4787 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682731 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682745 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682759 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682772 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682785 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682800 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682813 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682827 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.676655 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682842 4787 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.683121 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681425 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681787 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681966 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681995 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681965 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.681691 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682192 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.682387 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.683236 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.683333 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.683594 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.683665 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.683695 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.683810 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.684394 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.684422 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.684457 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.684910 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.685781 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.686123 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.686190 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.686306 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.686380 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:11.18635758 +0000 UTC m=+19.893493753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.687187 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.687288 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.687336 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.687842 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.688123 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.688712 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.688832 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.688881 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.688966 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.689284 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.689287 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.689403 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.689035 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.689664 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.689771 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.689986 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.690257 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.690267 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.690566 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.690770 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.690936 4787 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.691061 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:11.191024376 +0000 UTC m=+19.898160730 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.691735 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.692034 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.693138 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.693916 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.673666 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.694752 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.694891 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.695139 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.695271 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.695379 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.695657 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.695910 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.696358 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.697141 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.697408 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.699283 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.700088 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.702248 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.703005 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.708874 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.710432 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.710483 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.710474 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: 
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.710502 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.710727 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:11.21067261 +0000 UTC m=+19.917808943 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.711068 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.711094 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.711106 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.711197 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:11.211168073 +0000 UTC m=+19.918304386 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.711455 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.711668 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.711882 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.712097 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.712348 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.715249 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.718640 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.718981 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.719288 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.720211 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.720383 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.720585 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.720898 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.721090 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.721002 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc4
78274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.721435 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.721737 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.722099 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.722240 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.722339 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.722374 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.722469 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.722562 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.722989 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.722433 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.723224 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.723513 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.723698 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.724587 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.725367 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.725574 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.725729 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.725807 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.726155 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.728176 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.732232 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.732655 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.733782 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.734399 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.734416 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.734689 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.735027 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.736204 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.736916 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.739652 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.739650 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.741034 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: E0126 17:44:10.745739 4787 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.745941 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.746650 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.747412 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.756548 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.763497 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.768872 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.772633 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.783395 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784048 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784220 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784284 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784366 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784367 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784389 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784492 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784503 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784515 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784524 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784534 4787 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784545 4787 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784556 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784557 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784569 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784583 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784595 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784607 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784619 4787 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784631 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784643 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784656 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784668 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784680 4787 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784691 4787 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784703 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784714 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784725 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on 
node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784736 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784749 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784760 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784773 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784784 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784795 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784806 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath 
\"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784818 4787 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784829 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784839 4787 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784850 4787 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784860 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784871 4787 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784881 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 
17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784893 4787 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784904 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784915 4787 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784927 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784938 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784972 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784985 4787 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.784996 4787 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785007 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785018 4787 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785030 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785041 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785054 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785065 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785076 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785087 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785100 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785111 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785122 4787 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785134 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785145 4787 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785156 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785167 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785177 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785187 4787 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785200 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785210 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785220 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785231 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc 
kubenswrapper[4787]: I0126 17:44:10.785242 4787 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785252 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785262 4787 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785274 4787 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785284 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785294 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785306 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785317 4787 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785329 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785340 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785351 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785360 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785370 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785380 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785389 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785400 4787 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785411 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785420 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785430 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785440 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785449 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785459 4787 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on 
node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785470 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785481 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785492 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785503 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785514 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785524 4787 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785534 4787 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 
crc kubenswrapper[4787]: I0126 17:44:10.785544 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785554 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785564 4787 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785574 4787 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785585 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785596 4787 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785606 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785615 4787 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785624 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785634 4787 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785644 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785660 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785669 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785678 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785688 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node 
\"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785697 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785706 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785715 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785724 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785734 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785743 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785753 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785764 4787 reconciler_common.go:293] "Volume 
detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785775 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785786 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785797 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.785808 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.789961 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.797253 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.805349 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.816099 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.826768 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed
705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.838520 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.849427 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.862139 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.862695 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 17:44:10 crc kubenswrapper[4787]: W0126 17:44:10.868554 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c553619e4eebbb98847358e17f43d5fd83f4ce16ef2bdba057842b34ee4ed3b2 WatchSource:0}: Error finding container c553619e4eebbb98847358e17f43d5fd83f4ce16ef2bdba057842b34ee4ed3b2: Status 404 returned error can't find the container with id c553619e4eebbb98847358e17f43d5fd83f4ce16ef2bdba057842b34ee4ed3b2 Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.869619 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.877457 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:10 crc kubenswrapper[4787]: W0126 17:44:10.878431 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-167ece96b9d39cecc5958e3696c9c2e18bbf941b1a1091a7d99d62608fe3351d WatchSource:0}: Error finding container 167ece96b9d39cecc5958e3696c9c2e18bbf941b1a1091a7d99d62608fe3351d: Status 404 returned error can't find the container with id 167ece96b9d39cecc5958e3696c9c2e18bbf941b1a1091a7d99d62608fe3351d Jan 26 17:44:10 crc kubenswrapper[4787]: W0126 17:44:10.884102 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-31aa3b0d4e294428f7b07c4ca0116c76d324a58c2d2e3e4ab4a6eb2f85950abf WatchSource:0}: Error finding container 31aa3b0d4e294428f7b07c4ca0116c76d324a58c2d2e3e4ab4a6eb2f85950abf: Status 404 returned error can't find the container with id 
31aa3b0d4e294428f7b07c4ca0116c76d324a58c2d2e3e4ab4a6eb2f85950abf Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.886271 4787 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:10 crc kubenswrapper[4787]: I0126 17:44:10.889048 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.188463 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.188532 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 17:44:11.188691 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 
17:44:11.188745 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:12.188732025 +0000 UTC m=+20.895868158 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 17:44:11.188808 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:44:12.188788767 +0000 UTC m=+20.895924900 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.289151 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.289207 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.289233 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 17:44:11.289345 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:11 crc 
kubenswrapper[4787]: E0126 17:44:11.289357 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 17:44:11.289395 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 17:44:11.289401 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 17:44:11.289479 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:12.289459435 +0000 UTC m=+20.996595568 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 17:44:11.289488 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 17:44:11.289367 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 17:44:11.289530 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 17:44:11.289538 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:12.289523526 +0000 UTC m=+20.996659659 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:11 crc kubenswrapper[4787]: E0126 17:44:11.289587 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:12.289569748 +0000 UTC m=+20.996705881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.335416 4787 csr.go:261] certificate signing request csr-8jv45 is approved, waiting to be issued Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.350256 4787 csr.go:257] certificate signing request csr-8jv45 is issued Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.459149 4787 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 17:44:11 crc kubenswrapper[4787]: W0126 17:44:11.459424 4787 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second 
and no items received Jan 26 17:44:11 crc kubenswrapper[4787]: W0126 17:44:11.459459 4787 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 26 17:44:11 crc kubenswrapper[4787]: W0126 17:44:11.459510 4787 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 26 17:44:11 crc kubenswrapper[4787]: W0126 17:44:11.459539 4787 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 26 17:44:11 crc kubenswrapper[4787]: W0126 17:44:11.459544 4787 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 26 17:44:11 crc kubenswrapper[4787]: W0126 17:44:11.459566 4787 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 26 17:44:11 crc kubenswrapper[4787]: W0126 17:44:11.459592 4787 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - 
watch lasted less than a second and no items received Jan 26 17:44:11 crc kubenswrapper[4787]: W0126 17:44:11.459596 4787 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 17:44:11 crc kubenswrapper[4787]: W0126 17:44:11.459597 4787 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.541578 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:05:24.134548491 +0000 UTC Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.592881 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.593456 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.594278 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.594888 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 
17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.595527 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.596125 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.596824 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.597461 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.600094 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.600998 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.601565 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.602696 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 
17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.603188 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.604080 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.604574 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.605782 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.606570 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.607057 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.611453 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.612363 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 
17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.612941 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.614432 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.615001 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.616309 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.616819 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.618291 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.619247 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.619848 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.620391 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.621273 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.622414 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.623025 4787 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.623148 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.625411 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.626761 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.627290 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.629336 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.630563 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.631167 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.632318 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.634732 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.635370 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.636565 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.637213 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.638181 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.638643 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.641776 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.642381 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.643777 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.644321 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.645336 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.645878 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.646557 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.647776 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.648543 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.656103 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.682614 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.702763 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.722578 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.735267 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e"} Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.735323 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c"} Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.735339 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"31aa3b0d4e294428f7b07c4ca0116c76d324a58c2d2e3e4ab4a6eb2f85950abf"} Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.736280 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"167ece96b9d39cecc5958e3696c9c2e18bbf941b1a1091a7d99d62608fe3351d"} Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.738221 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5"} Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.738258 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c553619e4eebbb98847358e17f43d5fd83f4ce16ef2bdba057842b34ee4ed3b2"} Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.751931 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.766117 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.783813 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.799552 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.820579 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.840483 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.859631 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.881344 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.903618 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.944212 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:11 crc kubenswrapper[4787]: I0126 17:44:11.963155 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.195647 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.195713 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.195830 4787 
secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.195884 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:14.195871636 +0000 UTC m=+22.903007769 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.195899 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:44:14.195893017 +0000 UTC m=+22.903029150 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.204838 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-p5jzw"] Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.205232 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p5jzw" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.207554 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.207934 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.209415 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.222729 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.236170 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.252344 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.266457 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.279771 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.293023 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.296399 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.296449 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6hlp\" (UniqueName: \"kubernetes.io/projected/44f46132-8fcb-4066-9925-e9245a901928-kube-api-access-p6hlp\") pod \"node-resolver-p5jzw\" (UID: \"44f46132-8fcb-4066-9925-e9245a901928\") " pod="openshift-dns/node-resolver-p5jzw" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.296533 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.296556 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.296583 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/44f46132-8fcb-4066-9925-e9245a901928-hosts-file\") pod \"node-resolver-p5jzw\" (UID: \"44f46132-8fcb-4066-9925-e9245a901928\") " pod="openshift-dns/node-resolver-p5jzw" Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.296608 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.296649 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.296661 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.296666 4787 
configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.296719 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:14.296703828 +0000 UTC m=+23.003839961 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.296737 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:14.296728479 +0000 UTC m=+23.003864602 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.296808 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.296858 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.296872 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.296973 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:14.296930014 +0000 UTC m=+23.004066147 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.309421 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.326198 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.340644 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.351929 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 17:39:11 +0000 UTC, rotation deadline is 2026-11-14 11:19:26.626759648 +0000 UTC Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.352012 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7001h35m14.274751901s for next certificate rotation Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.397403 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6hlp\" (UniqueName: \"kubernetes.io/projected/44f46132-8fcb-4066-9925-e9245a901928-kube-api-access-p6hlp\") pod \"node-resolver-p5jzw\" (UID: \"44f46132-8fcb-4066-9925-e9245a901928\") " pod="openshift-dns/node-resolver-p5jzw" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.397463 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/44f46132-8fcb-4066-9925-e9245a901928-hosts-file\") pod \"node-resolver-p5jzw\" (UID: \"44f46132-8fcb-4066-9925-e9245a901928\") " pod="openshift-dns/node-resolver-p5jzw" Jan 26 17:44:12 crc 
kubenswrapper[4787]: I0126 17:44:12.397538 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/44f46132-8fcb-4066-9925-e9245a901928-hosts-file\") pod \"node-resolver-p5jzw\" (UID: \"44f46132-8fcb-4066-9925-e9245a901928\") " pod="openshift-dns/node-resolver-p5jzw" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.421846 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6hlp\" (UniqueName: \"kubernetes.io/projected/44f46132-8fcb-4066-9925-e9245a901928-kube-api-access-p6hlp\") pod \"node-resolver-p5jzw\" (UID: \"44f46132-8fcb-4066-9925-e9245a901928\") " pod="openshift-dns/node-resolver-p5jzw" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.454810 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.515662 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p5jzw" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.524690 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-slcv9"] Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.530798 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-65mpd"] Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.531119 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-6x4t8"] Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.531136 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.531839 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.534007 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.534728 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.535528 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.544428 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.544470 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.544508 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.544534 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.544571 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.544663 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.544784 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 
17:44:12.544768 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:18:39.986851395 +0000 UTC Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.544912 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.545009 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.545082 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.545328 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.556701 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.576318 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.588764 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.588797 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.588747 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.588915 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.589215 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:12 crc kubenswrapper[4787]: E0126 17:44:12.589465 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.593120 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599323 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-socket-dir-parent\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599367 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-daemon-config\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599385 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrcqg\" (UniqueName: \"kubernetes.io/projected/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-kube-api-access-qrcqg\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599417 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-run-k8s-cni-cncf-io\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599531 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ec33f96-57e2-438c-83d4-943e0782ca1f-cni-binary-copy\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599591 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-cni-dir\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599614 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-run-multus-certs\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599634 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-etc-kubernetes\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599663 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ec33f96-57e2-438c-83d4-943e0782ca1f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599712 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/418f020a-c193-4323-a29a-59c3ad0f1d35-proxy-tls\") pod \"machine-config-daemon-6x4t8\" (UID: \"418f020a-c193-4323-a29a-59c3ad0f1d35\") " pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599748 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-hostroot\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599772 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-os-release\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc 
kubenswrapper[4787]: I0126 17:44:12.599794 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz5f4\" (UniqueName: \"kubernetes.io/projected/418f020a-c193-4323-a29a-59c3ad0f1d35-kube-api-access-gz5f4\") pod \"machine-config-daemon-6x4t8\" (UID: \"418f020a-c193-4323-a29a-59c3ad0f1d35\") " pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599872 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-run-netns\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599929 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-var-lib-kubelet\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.599977 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600008 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/418f020a-c193-4323-a29a-59c3ad0f1d35-rootfs\") pod \"machine-config-daemon-6x4t8\" (UID: \"418f020a-c193-4323-a29a-59c3ad0f1d35\") " 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600039 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-system-cni-dir\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600065 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-cni-binary-copy\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600092 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-var-lib-cni-multus\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600175 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-cnibin\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600222 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/418f020a-c193-4323-a29a-59c3ad0f1d35-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-6x4t8\" (UID: \"418f020a-c193-4323-a29a-59c3ad0f1d35\") " pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600252 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-var-lib-cni-bin\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600277 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-conf-dir\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600308 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpr2n\" (UniqueName: \"kubernetes.io/projected/2ec33f96-57e2-438c-83d4-943e0782ca1f-kube-api-access-qpr2n\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600333 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-system-cni-dir\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600369 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-cnibin\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.600392 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-os-release\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.613321 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.630988 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.
168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.650128 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.669032 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.691990 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.697509 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701324 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-os-release\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701361 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz5f4\" (UniqueName: \"kubernetes.io/projected/418f020a-c193-4323-a29a-59c3ad0f1d35-kube-api-access-gz5f4\") pod \"machine-config-daemon-6x4t8\" (UID: \"418f020a-c193-4323-a29a-59c3ad0f1d35\") " pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc 
kubenswrapper[4787]: I0126 17:44:12.701392 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-run-netns\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701419 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-var-lib-kubelet\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701445 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701462 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/418f020a-c193-4323-a29a-59c3ad0f1d35-rootfs\") pod \"machine-config-daemon-6x4t8\" (UID: \"418f020a-c193-4323-a29a-59c3ad0f1d35\") " pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701479 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-system-cni-dir\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701494 
4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-cni-binary-copy\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701509 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-var-lib-cni-multus\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701524 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-cnibin\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701538 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/418f020a-c193-4323-a29a-59c3ad0f1d35-mcd-auth-proxy-config\") pod \"machine-config-daemon-6x4t8\" (UID: \"418f020a-c193-4323-a29a-59c3ad0f1d35\") " pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701554 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-var-lib-cni-bin\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701571 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-conf-dir\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701591 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpr2n\" (UniqueName: \"kubernetes.io/projected/2ec33f96-57e2-438c-83d4-943e0782ca1f-kube-api-access-qpr2n\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701612 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-system-cni-dir\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701627 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-cnibin\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701642 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-os-release\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701695 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-socket-dir-parent\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701711 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-daemon-config\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701726 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrcqg\" (UniqueName: \"kubernetes.io/projected/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-kube-api-access-qrcqg\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701755 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-run-k8s-cni-cncf-io\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701779 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ec33f96-57e2-438c-83d4-943e0782ca1f-cni-binary-copy\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701794 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-cni-dir\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701811 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-run-multus-certs\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701825 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-etc-kubernetes\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701842 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ec33f96-57e2-438c-83d4-943e0782ca1f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701864 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/418f020a-c193-4323-a29a-59c3ad0f1d35-proxy-tls\") pod \"machine-config-daemon-6x4t8\" (UID: \"418f020a-c193-4323-a29a-59c3ad0f1d35\") " pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701889 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-hostroot\") pod 
\"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.701976 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-hostroot\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702187 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-run-k8s-cni-cncf-io\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702245 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-os-release\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702206 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-os-release\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702293 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-system-cni-dir\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702328 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-cnibin\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702328 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-socket-dir-parent\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702368 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-system-cni-dir\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702620 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-var-lib-cni-multus\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702656 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-var-lib-kubelet\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702637 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-run-netns\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702701 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-var-lib-cni-bin\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702744 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-host-run-multus-certs\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.702719 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-conf-dir\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.703072 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.703110 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/418f020a-c193-4323-a29a-59c3ad0f1d35-rootfs\") pod \"machine-config-daemon-6x4t8\" (UID: 
\"418f020a-c193-4323-a29a-59c3ad0f1d35\") " pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.703190 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-daemon-config\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.703240 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-multus-cni-dir\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.703243 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-etc-kubernetes\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.703272 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ec33f96-57e2-438c-83d4-943e0782ca1f-cnibin\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.703737 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ec33f96-57e2-438c-83d4-943e0782ca1f-cni-binary-copy\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 
crc kubenswrapper[4787]: I0126 17:44:12.703737 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/418f020a-c193-4323-a29a-59c3ad0f1d35-mcd-auth-proxy-config\") pod \"machine-config-daemon-6x4t8\" (UID: \"418f020a-c193-4323-a29a-59c3ad0f1d35\") " pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.703823 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ec33f96-57e2-438c-83d4-943e0782ca1f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.703942 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-cni-binary-copy\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.708159 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.708334 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/418f020a-c193-4323-a29a-59c3ad0f1d35-proxy-tls\") pod \"machine-config-daemon-6x4t8\" (UID: \"418f020a-c193-4323-a29a-59c3ad0f1d35\") " pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.722730 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrcqg\" (UniqueName: 
\"kubernetes.io/projected/d2e50ad1-82f9-48f0-a103-6d584a3fa02e-kube-api-access-qrcqg\") pod \"multus-65mpd\" (UID: \"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\") " pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.723785 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.724680 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz5f4\" (UniqueName: \"kubernetes.io/projected/418f020a-c193-4323-a29a-59c3ad0f1d35-kube-api-access-gz5f4\") pod \"machine-config-daemon-6x4t8\" (UID: \"418f020a-c193-4323-a29a-59c3ad0f1d35\") " pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.729474 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpr2n\" (UniqueName: \"kubernetes.io/projected/2ec33f96-57e2-438c-83d4-943e0782ca1f-kube-api-access-qpr2n\") pod \"multus-additional-cni-plugins-slcv9\" (UID: \"2ec33f96-57e2-438c-83d4-943e0782ca1f\") " pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.742294 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p5jzw" event={"ID":"44f46132-8fcb-4066-9925-e9245a901928","Type":"ContainerStarted","Data":"0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6"} Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.742363 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p5jzw" 
event={"ID":"44f46132-8fcb-4066-9925-e9245a901928","Type":"ContainerStarted","Data":"ae1f6939f552906fa48dd9915609b876bfcbd36e3c10ad7e490ffb68cb23dcb7"} Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.744031 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.759709 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.762026 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.781111 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.790392 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.795062 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.813315 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d
773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.825241 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.836239 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.839681 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.852449 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.854683 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-slcv9" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.856604 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.859797 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.864883 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.868444 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-65mpd" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.871971 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.881335 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.887317 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.903233 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: W0126 17:44:12.905191 4787 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod418f020a_c193_4323_a29a_59c3ad0f1d35.slice/crio-f21e767ccf24c2a3a645b5ab2995ad8017499e09912e12307a97c8af9b63d620 WatchSource:0}: Error finding container f21e767ccf24c2a3a645b5ab2995ad8017499e09912e12307a97c8af9b63d620: Status 404 returned error can't find the container with id f21e767ccf24c2a3a645b5ab2995ad8017499e09912e12307a97c8af9b63d620 Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.917026 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cpbtq"] Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.917869 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.920565 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.920595 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.920567 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.920726 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.920789 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.921594 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.922245 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.925461 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.939438 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.942095 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.955405 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.956445 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.957205 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.958997 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.961019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.961073 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.961083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.961243 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.970624 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.971215 4787 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.971558 4787 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.972874 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.972924 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.972935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.972970 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.972984 4787 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:12Z","lastTransitionTime":"2026-01-26T17:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:12 crc kubenswrapper[4787]: I0126 17:44:12.994927 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: E0126 17:44:13.001439 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.004605 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-openvswitch\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.004733 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-var-lib-openvswitch\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.004764 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-node-log\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 
17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.004786 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jdfz\" (UniqueName: \"kubernetes.io/projected/474c6821-f8c5-400e-a584-0d63c13e0655-kube-api-access-2jdfz\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.004834 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-systemd\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.004860 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-etc-openvswitch\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.004881 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-bin\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.004904 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cpbtq\" (UID: 
\"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.004923 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-env-overrides\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.004982 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-script-lib\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.005003 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-log-socket\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.005021 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-netns\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.005037 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-ovn-kubernetes\") 
pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.005052 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-netd\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.005079 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-slash\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.005100 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/474c6821-f8c5-400e-a584-0d63c13e0655-ovn-node-metrics-cert\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.005124 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-kubelet\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.005138 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-systemd-units\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.005155 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-config\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.005182 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-ovn\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.017463 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.017532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.017545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.017562 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.017577 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.043092 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: E0126 17:44:13.063432 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.077341 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.077386 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.077399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.077431 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.077445 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.092426 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: E0126 17:44:13.100231 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.108674 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-env-overrides\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.108731 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-systemd\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.108753 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-etc-openvswitch\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.108774 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-bin\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.108796 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.108821 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-script-lib\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.108849 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-log-socket\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.108873 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-netd\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.108894 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-netns\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.108913 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-ovn-kubernetes\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.108975 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-slash\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109011 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/474c6821-f8c5-400e-a584-0d63c13e0655-ovn-node-metrics-cert\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109038 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-kubelet\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109062 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-systemd-units\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109067 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-netd\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109089 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-config\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109102 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109195 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-ovn-kubernetes\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109202 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-netns\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109238 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-slash\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109272 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-systemd\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109314 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-etc-openvswitch\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109354 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-bin\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109126 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-ovn\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109402 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-ovn\") pod \"ovnkube-node-cpbtq\" (UID: 
\"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109500 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-openvswitch\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109535 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-openvswitch\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109581 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-var-lib-openvswitch\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109606 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-node-log\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109628 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jdfz\" (UniqueName: \"kubernetes.io/projected/474c6821-f8c5-400e-a584-0d63c13e0655-kube-api-access-2jdfz\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.109997 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-script-lib\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.110050 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-log-socket\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.110085 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-var-lib-openvswitch\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.110106 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-node-log\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.110142 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-systemd-units\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.110172 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-kubelet\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.110595 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-config\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.111048 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-env-overrides\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.118014 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.118072 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.118082 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.118102 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.118134 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.121145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/474c6821-f8c5-400e-a584-0d63c13e0655-ovn-node-metrics-cert\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.134186 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.156813 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jdfz\" (UniqueName: \"kubernetes.io/projected/474c6821-f8c5-400e-a584-0d63c13e0655-kube-api-access-2jdfz\") pod \"ovnkube-node-cpbtq\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 
crc kubenswrapper[4787]: E0126 17:44:13.174525 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.181686 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.183170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.183289 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc 
kubenswrapper[4787]: I0126 17:44:13.183368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.183514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.183597 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.207343 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: E0126 17:44:13.209511 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: E0126 17:44:13.209651 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.211718 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.211751 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.211760 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.211779 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.211794 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.225042 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.252857 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.258875 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: W0126 17:44:13.268469 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod474c6821_f8c5_400e_a584_0d63c13e0655.slice/crio-cf5ac270d0348bf884b300058e024f3784459a331a22fec9c385f9c59ee5a5c8 WatchSource:0}: Error finding container cf5ac270d0348bf884b300058e024f3784459a331a22fec9c385f9c59ee5a5c8: Status 404 returned error can't find the container with id cf5ac270d0348bf884b300058e024f3784459a331a22fec9c385f9c59ee5a5c8 Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.279516 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.298184 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.314293 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.314372 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.314390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.314419 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.314440 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.325046 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.342550 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.368880 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.390983 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.404986 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.417139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.417180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.417190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.417205 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.417215 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.423852 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.437849 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.452601 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647e
e67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.466387 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.483283 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.503864 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.519196 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.520022 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.520066 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.520080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.520100 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.520116 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.538871 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.545281 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:03:17.850152231 +0000 UTC Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.580916 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.622836 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.622894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.622908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.622926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.622940 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.726446 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.726500 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.726523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.726544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.726557 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.747729 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82" exitCode=0 Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.747826 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.748446 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"cf5ac270d0348bf884b300058e024f3784459a331a22fec9c385f9c59ee5a5c8"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.751328 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.751357 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.751369 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"f21e767ccf24c2a3a645b5ab2995ad8017499e09912e12307a97c8af9b63d620"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 
17:44:13.753406 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.754857 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65mpd" event={"ID":"d2e50ad1-82f9-48f0-a103-6d584a3fa02e","Type":"ContainerStarted","Data":"6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.754917 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65mpd" event={"ID":"d2e50ad1-82f9-48f0-a103-6d584a3fa02e","Type":"ContainerStarted","Data":"042f856ff716e805b06d4ab2cef3b1a3583d9244f2054cff97986e2d55ada5c7"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.756740 4787 generic.go:334] "Generic (PLEG): container finished" podID="2ec33f96-57e2-438c-83d4-943e0782ca1f" containerID="7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6" exitCode=0 Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.756834 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" event={"ID":"2ec33f96-57e2-438c-83d4-943e0782ca1f","Type":"ContainerDied","Data":"7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.756889 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" event={"ID":"2ec33f96-57e2-438c-83d4-943e0782ca1f","Type":"ContainerStarted","Data":"4fa49c598138512e6215f3652ff0df5da94c0c95a751ab71fbad200cd8285e62"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.786732 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.810439 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.827110 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.833806 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.833885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.833898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.833970 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.833989 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.840570 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.856016 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.879417 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.897912 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.917487 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.938423 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.940380 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.940423 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.940441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.940461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.940473 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:13Z","lastTransitionTime":"2026-01-26T17:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:13 crc kubenswrapper[4787]: I0126 17:44:13.979554 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d2
27ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.020228 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.043265 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.043314 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:14 crc 
kubenswrapper[4787]: I0126 17:44:14.043323 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.043340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.043351 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:14Z","lastTransitionTime":"2026-01-26T17:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.059230 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.097194 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.138553 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.147487 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.147568 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.147581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.147624 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.147641 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:14Z","lastTransitionTime":"2026-01-26T17:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.183544 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.220005 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.222311 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.222558 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 17:44:18.222488768 +0000 UTC m=+26.929624901 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.222627 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.222822 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.222911 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:18.222890889 +0000 UTC m=+26.930027022 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.251792 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.251856 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.251870 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.251894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.251910 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:14Z","lastTransitionTime":"2026-01-26T17:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.260560 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.294974 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.323368 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.323421 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.323456 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.323561 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.323606 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.323609 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.323629 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.323637 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.323644 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.323655 4787 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.323676 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:18.323640939 +0000 UTC m=+27.030777072 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.323705 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:18.3236876 +0000 UTC m=+27.030823733 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.323727 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:18.323717491 +0000 UTC m=+27.030853624 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.335325 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.355207 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.355257 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.355268 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.355287 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.355300 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:14Z","lastTransitionTime":"2026-01-26T17:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.378238 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.417509 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.458449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:14 crc 
kubenswrapper[4787]: I0126 17:44:14.458486 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.458496 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.458512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.458523 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:14Z","lastTransitionTime":"2026-01-26T17:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.458613 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.500209 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.543058 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.546064 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:26:25.666130432 +0000 UTC Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.561522 4787 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.561574 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.561588 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.561613 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.561627 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:14Z","lastTransitionTime":"2026-01-26T17:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.577709 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.588398 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.588524 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.588602 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.588544 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.588791 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:14 crc kubenswrapper[4787]: E0126 17:44:14.588935 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.625682 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.659372 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.669339 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.669383 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.669394 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:14 crc 
kubenswrapper[4787]: I0126 17:44:14.669416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.669429 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:14Z","lastTransitionTime":"2026-01-26T17:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.733424 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.771404 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.771446 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" event={"ID":"2ec33f96-57e2-438c-83d4-943e0782ca1f","Type":"ContainerStarted","Data":"d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6"} Jan 26 17:44:14 crc 
kubenswrapper[4787]: I0126 17:44:14.771467 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.771572 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.771602 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.771616 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:14Z","lastTransitionTime":"2026-01-26T17:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.775015 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.775049 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.775059 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 
17:44:14.784527 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.813043 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.833765 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d
3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.860256 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.873905 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.874046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.874062 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.874079 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.874090 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:14Z","lastTransitionTime":"2026-01-26T17:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.897404 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z 
is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.939734 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74
ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.975844 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.976001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.976080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.976204 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.976298 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:14Z","lastTransitionTime":"2026-01-26T17:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:14 crc kubenswrapper[4787]: I0126 17:44:14.979456 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.018640 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.056437 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.079846 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.079882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.079892 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.079908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.079919 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:15Z","lastTransitionTime":"2026-01-26T17:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.095673 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.143859 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.178063 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.182807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.182849 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 
17:44:15.182858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.182872 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.182882 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:15Z","lastTransitionTime":"2026-01-26T17:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.216482 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.256762 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.286606 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.286669 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.286680 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:15 crc 
kubenswrapper[4787]: I0126 17:44:15.286702 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.286715 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:15Z","lastTransitionTime":"2026-01-26T17:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.299534 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.337586 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.383501 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.390139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.390194 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.390209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.390231 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.390247 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:15Z","lastTransitionTime":"2026-01-26T17:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.421937 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.455809 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.493349 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.493429 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.493451 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.493477 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.493497 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:15Z","lastTransitionTime":"2026-01-26T17:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.497579 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.537681 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.547125 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:10:15.026260091 +0000 UTC Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.579697 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.595089 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.595139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.595150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.595169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.595181 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:15Z","lastTransitionTime":"2026-01-26T17:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.618002 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.662339 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.697741 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.697777 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.697787 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.697802 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.697814 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:15Z","lastTransitionTime":"2026-01-26T17:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.697970 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a
0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.735625 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.779198 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.782344 4787 generic.go:334] "Generic (PLEG): container finished" podID="2ec33f96-57e2-438c-83d4-943e0782ca1f" 
containerID="d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6" exitCode=0 Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.782395 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" event={"ID":"2ec33f96-57e2-438c-83d4-943e0782ca1f","Type":"ContainerDied","Data":"d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.786873 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.786919 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.786935 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.800205 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.800232 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.800243 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.800259 4787 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.800271 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:15Z","lastTransitionTime":"2026-01-26T17:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.822291 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b
6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.859738 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.899623 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.902299 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.902344 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.902353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.902372 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.902383 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:15Z","lastTransitionTime":"2026-01-26T17:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.936656 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:15 crc kubenswrapper[4787]: I0126 17:44:15.982471 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.005416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.005475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.005484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.005500 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.005509 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:16Z","lastTransitionTime":"2026-01-26T17:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.017592 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.056332 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.096691 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.108650 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:16 crc 
kubenswrapper[4787]: I0126 17:44:16.108690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.108699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.108719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.108728 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:16Z","lastTransitionTime":"2026-01-26T17:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.138116 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.176810 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.211019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.211059 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.211068 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.211080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.211090 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:16Z","lastTransitionTime":"2026-01-26T17:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.218355 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.261794 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.295795 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.314363 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.314416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.314439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.314482 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.314497 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:16Z","lastTransitionTime":"2026-01-26T17:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.338627 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.373684 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.417576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.417616 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.417626 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:16 crc 
kubenswrapper[4787]: I0126 17:44:16.417642 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.417652 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:16Z","lastTransitionTime":"2026-01-26T17:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.520766 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.520823 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.520838 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.520861 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.520875 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:16Z","lastTransitionTime":"2026-01-26T17:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.548155 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:27:54.561956888 +0000 UTC Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.588699 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.588756 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:16 crc kubenswrapper[4787]: E0126 17:44:16.588823 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.588936 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:16 crc kubenswrapper[4787]: E0126 17:44:16.588998 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:16 crc kubenswrapper[4787]: E0126 17:44:16.589029 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.623204 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.623258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.623271 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.623296 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.623320 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:16Z","lastTransitionTime":"2026-01-26T17:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.699621 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tqrnl"] Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.700299 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tqrnl" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.702428 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.702811 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.704478 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.705751 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.721128 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.725621 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.725659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.725679 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.725701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.725714 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:16Z","lastTransitionTime":"2026-01-26T17:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.736444 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.757661 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.771190 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.783932 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.792876 4787 generic.go:334] "Generic (PLEG): container finished" podID="2ec33f96-57e2-438c-83d4-943e0782ca1f" containerID="4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f" exitCode=0 Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.792929 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" event={"ID":"2ec33f96-57e2-438c-83d4-943e0782ca1f","Type":"ContainerDied","Data":"4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f"} Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.813966 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.828622 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.828668 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.828679 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.828695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.828710 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:16Z","lastTransitionTime":"2026-01-26T17:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.835996 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.851166 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.852923 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj6cv\" (UniqueName: \"kubernetes.io/projected/50235fa2-913c-4797-a4e3-6bf92f998335-kube-api-access-jj6cv\") pod \"node-ca-tqrnl\" (UID: \"50235fa2-913c-4797-a4e3-6bf92f998335\") " pod="openshift-image-registry/node-ca-tqrnl" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.853015 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/50235fa2-913c-4797-a4e3-6bf92f998335-serviceca\") pod \"node-ca-tqrnl\" (UID: \"50235fa2-913c-4797-a4e3-6bf92f998335\") " pod="openshift-image-registry/node-ca-tqrnl" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.853043 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/50235fa2-913c-4797-a4e3-6bf92f998335-host\") pod \"node-ca-tqrnl\" (UID: \"50235fa2-913c-4797-a4e3-6bf92f998335\") " pod="openshift-image-registry/node-ca-tqrnl" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.864875 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":
\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.878835 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.897870 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.931927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.931982 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.931993 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.932010 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.932022 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:16Z","lastTransitionTime":"2026-01-26T17:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.943911 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.953750 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj6cv\" (UniqueName: \"kubernetes.io/projected/50235fa2-913c-4797-a4e3-6bf92f998335-kube-api-access-jj6cv\") pod \"node-ca-tqrnl\" (UID: \"50235fa2-913c-4797-a4e3-6bf92f998335\") " pod="openshift-image-registry/node-ca-tqrnl" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.953840 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/50235fa2-913c-4797-a4e3-6bf92f998335-serviceca\") pod \"node-ca-tqrnl\" (UID: \"50235fa2-913c-4797-a4e3-6bf92f998335\") " pod="openshift-image-registry/node-ca-tqrnl" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.953901 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/50235fa2-913c-4797-a4e3-6bf92f998335-host\") pod \"node-ca-tqrnl\" (UID: \"50235fa2-913c-4797-a4e3-6bf92f998335\") " pod="openshift-image-registry/node-ca-tqrnl" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.954388 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/50235fa2-913c-4797-a4e3-6bf92f998335-host\") pod \"node-ca-tqrnl\" (UID: \"50235fa2-913c-4797-a4e3-6bf92f998335\") " pod="openshift-image-registry/node-ca-tqrnl" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.955925 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/50235fa2-913c-4797-a4e3-6bf92f998335-serviceca\") pod \"node-ca-tqrnl\" (UID: \"50235fa2-913c-4797-a4e3-6bf92f998335\") " pod="openshift-image-registry/node-ca-tqrnl" Jan 26 17:44:16 crc kubenswrapper[4787]: I0126 17:44:16.975175 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.005911 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj6cv\" (UniqueName: \"kubernetes.io/projected/50235fa2-913c-4797-a4e3-6bf92f998335-kube-api-access-jj6cv\") pod \"node-ca-tqrnl\" (UID: \"50235fa2-913c-4797-a4e3-6bf92f998335\") " pod="openshift-image-registry/node-ca-tqrnl" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.022184 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tqrnl" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.034469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.034529 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.034541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.034560 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.034571 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:17Z","lastTransitionTime":"2026-01-26T17:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.036980 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: W0126 17:44:17.037878 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50235fa2_913c_4797_a4e3_6bf92f998335.slice/crio-f962469323a15aa88610b19689d4315f9070156b14b8d9eb951ae78bdc206d80 WatchSource:0}: Error finding container f962469323a15aa88610b19689d4315f9070156b14b8d9eb951ae78bdc206d80: Status 404 returned error can't find the container with id f962469323a15aa88610b19689d4315f9070156b14b8d9eb951ae78bdc206d80 Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.076977 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.116560 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.137596 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.137635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.137644 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.137658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.137677 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:17Z","lastTransitionTime":"2026-01-26T17:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.155878 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.196348 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.236892 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.243180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.243230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.243246 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.243271 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.243287 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:17Z","lastTransitionTime":"2026-01-26T17:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.275552 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.320560 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.347684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.347721 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.347729 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.347742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.347751 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:17Z","lastTransitionTime":"2026-01-26T17:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.354534 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.397273 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.436546 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.450259 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.450314 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.450324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:17 crc 
kubenswrapper[4787]: I0126 17:44:17.450347 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.450365 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:17Z","lastTransitionTime":"2026-01-26T17:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.480321 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.513831 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.548601 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:45:40.842433808 +0000 UTC Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.553017 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.553060 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.553072 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.553088 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.553100 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:17Z","lastTransitionTime":"2026-01-26T17:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.557394 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.597172 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.634803 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.655817 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.655872 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.655885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.655907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.655921 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:17Z","lastTransitionTime":"2026-01-26T17:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.681605 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.758497 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.758543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.758554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.758570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.758582 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:17Z","lastTransitionTime":"2026-01-26T17:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.799684 4787 generic.go:334] "Generic (PLEG): container finished" podID="2ec33f96-57e2-438c-83d4-943e0782ca1f" containerID="f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068" exitCode=0 Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.799749 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" event={"ID":"2ec33f96-57e2-438c-83d4-943e0782ca1f","Type":"ContainerDied","Data":"f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.803043 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tqrnl" event={"ID":"50235fa2-913c-4797-a4e3-6bf92f998335","Type":"ContainerStarted","Data":"1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.803561 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tqrnl" event={"ID":"50235fa2-913c-4797-a4e3-6bf92f998335","Type":"ContainerStarted","Data":"f962469323a15aa88610b19689d4315f9070156b14b8d9eb951ae78bdc206d80"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.808693 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.825302 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.839099 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.855655 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.863096 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.863163 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.863174 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.863251 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.863264 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:17Z","lastTransitionTime":"2026-01-26T17:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.867791 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.882619 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a793
79b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.916345 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.957691 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.966070 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:17 crc 
kubenswrapper[4787]: I0126 17:44:17.966145 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.966159 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.966184 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.966199 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:17Z","lastTransitionTime":"2026-01-26T17:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:17 crc kubenswrapper[4787]: I0126 17:44:17.997282 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.041841 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.068495 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.068538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.068553 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.068569 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.068581 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:18Z","lastTransitionTime":"2026-01-26T17:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.078307 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.119240 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.161662 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.171087 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.171167 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.171182 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.171207 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.171222 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:18Z","lastTransitionTime":"2026-01-26T17:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.198204 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.238859 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.268076 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.268199 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:44:26.268177824 +0000 UTC m=+34.975313957 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.268263 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.268359 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.268398 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:26.268391539 +0000 UTC m=+34.975527672 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.273688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.273717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.273726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.273749 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.273759 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:18Z","lastTransitionTime":"2026-01-26T17:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.286216 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.322344 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.359233 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.368749 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.368784 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.368805 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.368908 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.369021 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:26.369001625 +0000 UTC m=+35.076137758 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.368916 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.369085 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.369099 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.368920 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.369160 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:26.369141029 +0000 UTC m=+35.076277242 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.369165 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.369181 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.369219 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:26.369207021 +0000 UTC m=+35.076343154 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.376071 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.376121 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.376134 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.376153 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.376164 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:18Z","lastTransitionTime":"2026-01-26T17:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.397103 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z 
is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.436367 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.476454 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.478161 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.478237 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.478252 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.478276 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.478290 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:18Z","lastTransitionTime":"2026-01-26T17:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.526814 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada81
0afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.549704 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:50:55.58414946 +0000 UTC Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.557179 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.580460 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.580504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.580517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.580534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.580546 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:18Z","lastTransitionTime":"2026-01-26T17:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.588263 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.588411 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.588839 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.588933 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.589028 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:18 crc kubenswrapper[4787]: E0126 17:44:18.589091 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.605133 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.635156 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.674885 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee632
4b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.682712 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.682882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.683035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.683185 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.683306 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:18Z","lastTransitionTime":"2026-01-26T17:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.717045 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.757926 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.786210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.786246 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.786258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.786275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.786286 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:18Z","lastTransitionTime":"2026-01-26T17:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.799749 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.815197 4787 generic.go:334] "Generic (PLEG): container finished" podID="2ec33f96-57e2-438c-83d4-943e0782ca1f" containerID="5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8" exitCode=0 Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.815250 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" event={"ID":"2ec33f96-57e2-438c-83d4-943e0782ca1f","Type":"ContainerDied","Data":"5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8"} Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.840159 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.888802 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.888873 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.888892 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.888917 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.888936 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:18Z","lastTransitionTime":"2026-01-26T17:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.890865 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.919687 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.962141 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.992156 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.992201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.992215 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.992235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.992250 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:18Z","lastTransitionTime":"2026-01-26T17:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:18 crc kubenswrapper[4787]: I0126 17:44:18.998000 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T17:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.045252 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.094626 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.108519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.108569 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.108586 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.108608 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.108626 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:19Z","lastTransitionTime":"2026-01-26T17:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.143040 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a
0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.159019 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.197489 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.211894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.211970 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.211984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.212000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.212011 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:19Z","lastTransitionTime":"2026-01-26T17:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.239726 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d2
27ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.277004 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.315269 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.315371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:19 crc 
kubenswrapper[4787]: I0126 17:44:19.315386 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.315404 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.315416 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:19Z","lastTransitionTime":"2026-01-26T17:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.317040 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.359614 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.398251 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.418259 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.418308 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.418318 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.418338 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.418349 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:19Z","lastTransitionTime":"2026-01-26T17:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.437247 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.479321 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.521648 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.521686 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.521695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.521710 4787 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.521720 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:19Z","lastTransitionTime":"2026-01-26T17:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.550096 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:50:31.336863548 +0000 UTC Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.623832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.623871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.623881 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.623897 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.623910 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:19Z","lastTransitionTime":"2026-01-26T17:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.727689 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.727733 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.727747 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.727766 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.727779 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:19Z","lastTransitionTime":"2026-01-26T17:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.825093 4787 generic.go:334] "Generic (PLEG): container finished" podID="2ec33f96-57e2-438c-83d4-943e0782ca1f" containerID="bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8" exitCode=0 Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.825152 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" event={"ID":"2ec33f96-57e2-438c-83d4-943e0782ca1f","Type":"ContainerDied","Data":"bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8"} Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.830964 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.831110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.831179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.831244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.831341 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:19Z","lastTransitionTime":"2026-01-26T17:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.848188 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.864580 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.886604 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.904272 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.919041 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.933441 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.933921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.933988 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.934002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.934020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.934032 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:19Z","lastTransitionTime":"2026-01-26T17:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.948355 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.961185 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.975401 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:19 crc kubenswrapper[4787]: I0126 17:44:19.990219 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.002882 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.015856 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.035265 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff
6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.037514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.037544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.037554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.037571 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.037582 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:20Z","lastTransitionTime":"2026-01-26T17:44:20Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.049285 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.086386 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.141109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:20 crc 
kubenswrapper[4787]: I0126 17:44:20.141178 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.141196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.141220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.141238 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:20Z","lastTransitionTime":"2026-01-26T17:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.243590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.244046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.244099 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.244121 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.244135 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:20Z","lastTransitionTime":"2026-01-26T17:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.347691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.347741 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.347758 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.347775 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.347786 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:20Z","lastTransitionTime":"2026-01-26T17:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.450783 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.450822 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.450830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.450847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.450856 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:20Z","lastTransitionTime":"2026-01-26T17:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.550515 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:01:05.041510505 +0000 UTC Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.553421 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.553454 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.553462 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.553475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.553484 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:20Z","lastTransitionTime":"2026-01-26T17:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.589053 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.589049 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:20 crc kubenswrapper[4787]: E0126 17:44:20.589189 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.589189 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:20 crc kubenswrapper[4787]: E0126 17:44:20.589328 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:20 crc kubenswrapper[4787]: E0126 17:44:20.589492 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.655795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.655847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.655862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.655882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.655898 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:20Z","lastTransitionTime":"2026-01-26T17:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.759156 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.759204 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.759218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.759238 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.759255 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:20Z","lastTransitionTime":"2026-01-26T17:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.833796 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" event={"ID":"2ec33f96-57e2-438c-83d4-943e0782ca1f","Type":"ContainerStarted","Data":"ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.843104 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.851540 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.861369 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.861410 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.861419 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.861433 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.861442 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:20Z","lastTransitionTime":"2026-01-26T17:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.866329 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d2
27ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.879363 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.894498 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.905816 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.917924 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.933058 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.956427 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.964492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.964541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.964552 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.964569 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.964773 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:20Z","lastTransitionTime":"2026-01-26T17:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.967874 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.981664 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:20 crc kubenswrapper[4787]: I0126 17:44:20.995162 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.009408 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.031244 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.046643 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.062503 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.066918 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.066968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.066976 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.066992 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.067003 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:21Z","lastTransitionTime":"2026-01-26T17:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.084306 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.095862 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.110528 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee632
4b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.125492 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.139674 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.152784 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.163634 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609
c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.168980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.169012 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.169027 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.169046 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.169058 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:21Z","lastTransitionTime":"2026-01-26T17:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.183099 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.195278 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.207576 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.219407 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.231259 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.242061 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.255449 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.270875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.270901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.270910 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.270921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.270932 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:21Z","lastTransitionTime":"2026-01-26T17:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.272502 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.373422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.378056 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.378078 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.378094 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.378104 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:21Z","lastTransitionTime":"2026-01-26T17:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.480132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.480165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.480175 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.480190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.480200 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:21Z","lastTransitionTime":"2026-01-26T17:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.551215 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:07:53.500244554 +0000 UTC Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.582304 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.582364 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.582376 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.582392 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.582407 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:21Z","lastTransitionTime":"2026-01-26T17:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.610367 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.627562 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.672778 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.684213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.684256 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.684270 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.684286 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.684298 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:21Z","lastTransitionTime":"2026-01-26T17:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.688778 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.719568 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b
5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.739594 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.757135 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee632
4b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.773062 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.787576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.787624 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.787636 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:21 crc 
kubenswrapper[4787]: I0126 17:44:21.787651 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.787661 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:21Z","lastTransitionTime":"2026-01-26T17:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.792183 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17
:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.806980 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.816543 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.833968 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba53
1485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.845733 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.846095 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.846157 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.848605 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.865193 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.873616 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.875274 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.880784 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.889824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:21 crc 
kubenswrapper[4787]: I0126 17:44:21.889854 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.889862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.889877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.889888 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:21Z","lastTransitionTime":"2026-01-26T17:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.916597 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.956857 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.991872 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.991915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.991924 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.991940 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.991964 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:21Z","lastTransitionTime":"2026-01-26T17:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:21 crc kubenswrapper[4787]: I0126 17:44:21.994811 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.041198 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.081992 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.094858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.094907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.094920 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.094936 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.094967 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:22Z","lastTransitionTime":"2026-01-26T17:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.117438 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.159025 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.194591 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.197321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.197352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.197362 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.197375 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.197384 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:22Z","lastTransitionTime":"2026-01-26T17:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.238519 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.277896 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.299630 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.299680 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.299690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.299704 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.299713 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:22Z","lastTransitionTime":"2026-01-26T17:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.314361 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.365881 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b
5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.394941 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.401293 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.401343 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.401356 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.401375 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.401388 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:22Z","lastTransitionTime":"2026-01-26T17:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.437128 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.474448 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.504333 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.504388 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.504403 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:22 crc 
kubenswrapper[4787]: I0126 17:44:22.504431 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.504451 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:22Z","lastTransitionTime":"2026-01-26T17:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.551539 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:51:17.917060022 +0000 UTC Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.589236 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:22 crc kubenswrapper[4787]: E0126 17:44:22.589369 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.589713 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:22 crc kubenswrapper[4787]: E0126 17:44:22.590229 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.589728 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:22 crc kubenswrapper[4787]: E0126 17:44:22.590377 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.607439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.607489 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.607505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.607523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.607535 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:22Z","lastTransitionTime":"2026-01-26T17:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.710151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.710188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.710205 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.710227 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.710243 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:22Z","lastTransitionTime":"2026-01-26T17:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.812639 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.812692 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.812703 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.812719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.812730 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:22Z","lastTransitionTime":"2026-01-26T17:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.850317 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/0.log" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.852964 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69" exitCode=1 Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.853024 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69"} Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.853640 4787 scope.go:117] "RemoveContainer" containerID="f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.866416 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.888649 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.900800 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.915093 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.915138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.915151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.915169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.915182 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:22Z","lastTransitionTime":"2026-01-26T17:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.918792 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.935282 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.949048 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:22 crc kubenswrapper[4787]: I0126 17:44:22.959736 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609
c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.001253 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:22Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:22.226087 6081 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 17:44:22.226116 6081 handler.go:190] Sending *v1.EgressIP 
event handler 8 for removal\\\\nI0126 17:44:22.226144 6081 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 17:44:22.226150 6081 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 17:44:22.226163 6081 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:22.226152 6081 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 17:44:22.226173 6081 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 17:44:22.226188 6081 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 17:44:22.226210 6081 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 17:44:22.226214 6081 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:22.226252 6081 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 17:44:22.226265 6081 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 17:44:22.226281 6081 factory.go:656] Stopping watch factory\\\\nI0126 17:44:22.226291 6081 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:22.226294 6081 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.014102 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.021243 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.021267 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.021276 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.021291 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.021301 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.025915 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.036809 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.048771 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.060076 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.072258 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.081288 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.129703 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.129754 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.129765 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.129782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.129794 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.231934 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.231984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.231994 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.232008 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.232019 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.263373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.263410 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.263419 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.263432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.263442 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: E0126 17:44:23.276140 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.282202 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.282248 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.282263 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.282279 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.282290 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: E0126 17:44:23.293224 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.296987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.297036 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.297049 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.297065 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.297077 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: E0126 17:44:23.308414 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.311690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.311721 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.311732 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.311747 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.311760 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: E0126 17:44:23.322490 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.326255 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.326291 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.326300 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.326315 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.326326 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: E0126 17:44:23.337974 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: E0126 17:44:23.338318 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.339593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.339623 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.339635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.339650 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.339661 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.442317 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.442544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.442648 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.442736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.442812 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.546050 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.546112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.546132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.546154 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.546171 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.551720 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:19:27.447692697 +0000 UTC Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.649150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.649268 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.649331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.649434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.649498 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.752731 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.752768 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.752778 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.752791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.752802 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.856611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.856649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.856662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.856701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.856717 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.862861 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/0.log" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.865734 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c"} Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.865872 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.879390 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.890233 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.901088 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.914211 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.944357 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.960282 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.960335 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.960356 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.960379 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.960396 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:23Z","lastTransitionTime":"2026-01-26T17:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.963821 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:23 crc kubenswrapper[4787]: I0126 17:44:23.983812 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.008621 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.034363 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:22Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:22.226087 6081 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 17:44:22.226116 6081 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 17:44:22.226144 6081 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0126 17:44:22.226150 6081 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 17:44:22.226163 6081 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:22.226152 6081 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 17:44:22.226173 6081 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 17:44:22.226188 6081 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 17:44:22.226210 6081 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 17:44:22.226214 6081 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:22.226252 6081 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 17:44:22.226265 6081 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 17:44:22.226281 6081 factory.go:656] Stopping watch factory\\\\nI0126 17:44:22.226291 6081 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:22.226294 6081 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.057660 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.062675 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.062729 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.062743 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.062764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.062777 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:24Z","lastTransitionTime":"2026-01-26T17:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.070984 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n"] Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.071450 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.073795 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.074173 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.074467 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.086601 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.099302 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.110166 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.127169 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.147659 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.159592 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.164652 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.164689 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.164697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.164711 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.164722 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:24Z","lastTransitionTime":"2026-01-26T17:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.173127 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.185124 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.196611 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.211760 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.223686 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.232310 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wdh\" (UniqueName: \"kubernetes.io/projected/98cea2bc-3771-492e-8944-f87958ff034a-kube-api-access-96wdh\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: \"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.232360 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98cea2bc-3771-492e-8944-f87958ff034a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: \"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.232423 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98cea2bc-3771-492e-8944-f87958ff034a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: \"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.232473 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98cea2bc-3771-492e-8944-f87958ff034a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: \"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.234333 4787 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.255445 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:22Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:22.226087 6081 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 17:44:22.226116 6081 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 17:44:22.226144 6081 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0126 17:44:22.226150 6081 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 17:44:22.226163 6081 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:22.226152 6081 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 17:44:22.226173 6081 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 17:44:22.226188 6081 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 17:44:22.226210 6081 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 17:44:22.226214 6081 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:22.226252 6081 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 17:44:22.226265 6081 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 17:44:22.226281 6081 factory.go:656] Stopping watch factory\\\\nI0126 17:44:22.226291 6081 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:22.226294 6081 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.267565 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.267625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.267645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.267671 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.267689 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:24Z","lastTransitionTime":"2026-01-26T17:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.268976 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.284376 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.299388 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.314031 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.332519 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.333386 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98cea2bc-3771-492e-8944-f87958ff034a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: 
\"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.333553 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96wdh\" (UniqueName: \"kubernetes.io/projected/98cea2bc-3771-492e-8944-f87958ff034a-kube-api-access-96wdh\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: \"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.333636 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98cea2bc-3771-492e-8944-f87958ff034a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: \"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.333720 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98cea2bc-3771-492e-8944-f87958ff034a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: \"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.334560 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98cea2bc-3771-492e-8944-f87958ff034a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: \"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.334808 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/98cea2bc-3771-492e-8944-f87958ff034a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: \"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.348037 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98cea2bc-3771-492e-8944-f87958ff034a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: \"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.352580 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.371363 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.371413 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.371429 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.371460 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.371475 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:24Z","lastTransitionTime":"2026-01-26T17:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.373303 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96wdh\" (UniqueName: \"kubernetes.io/projected/98cea2bc-3771-492e-8944-f87958ff034a-kube-api-access-96wdh\") pod \"ovnkube-control-plane-749d76644c-b992n\" (UID: \"98cea2bc-3771-492e-8944-f87958ff034a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.381064 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.390084 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" Jan 26 17:44:24 crc kubenswrapper[4787]: W0126 17:44:24.403593 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98cea2bc_3771_492e_8944_f87958ff034a.slice/crio-9bdfc64d7c4ebdebab8ad5d8f6ccce5863e6cfdf38fa65345872c27896a548ee WatchSource:0}: Error finding container 9bdfc64d7c4ebdebab8ad5d8f6ccce5863e6cfdf38fa65345872c27896a548ee: Status 404 returned error can't find the container with id 9bdfc64d7c4ebdebab8ad5d8f6ccce5863e6cfdf38fa65345872c27896a548ee Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.476690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.476731 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.476741 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.476756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:24 crc 
kubenswrapper[4787]: I0126 17:44:24.476764 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:24Z","lastTransitionTime":"2026-01-26T17:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.552234 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:56:14.096626492 +0000 UTC Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.578830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.578871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.578882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.578897 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.578908 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:24Z","lastTransitionTime":"2026-01-26T17:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.588915 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.588968 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.588932 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:24 crc kubenswrapper[4787]: E0126 17:44:24.589079 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:24 crc kubenswrapper[4787]: E0126 17:44:24.589182 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:24 crc kubenswrapper[4787]: E0126 17:44:24.589242 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.681539 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.681582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.681590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.681604 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.681614 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:24Z","lastTransitionTime":"2026-01-26T17:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.784563 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.784610 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.784627 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.784649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.784666 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:24Z","lastTransitionTime":"2026-01-26T17:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.871174 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/1.log" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.872064 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/0.log" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.875286 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c" exitCode=1 Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.875317 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.875384 4787 scope.go:117] "RemoveContainer" containerID="f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.876074 4787 scope.go:117] "RemoveContainer" containerID="c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c" Jan 26 17:44:24 crc kubenswrapper[4787]: E0126 17:44:24.876310 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.876915 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" event={"ID":"98cea2bc-3771-492e-8944-f87958ff034a","Type":"ContainerStarted","Data":"9bdfc64d7c4ebdebab8ad5d8f6ccce5863e6cfdf38fa65345872c27896a548ee"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.889612 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.889660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.889678 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.889699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.889715 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:24Z","lastTransitionTime":"2026-01-26T17:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.893018 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d2
27ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.910536 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.925783 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.939385 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.953298 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.968519 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.979527 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.993794 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.993862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.993879 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.993899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.993912 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:24Z","lastTransitionTime":"2026-01-26T17:44:24Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:24 crc kubenswrapper[4787]: I0126 17:44:24.993814 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.015899 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf9
4edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.031976 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.047473 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee632
4b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.057461 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.071969 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.085545 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.097611 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609
c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.099190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.099242 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.099252 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.099269 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.099281 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:25Z","lastTransitionTime":"2026-01-26T17:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.117509 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:22Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:22.226087 6081 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 17:44:22.226116 6081 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 17:44:22.226144 6081 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0126 17:44:22.226150 6081 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 17:44:22.226163 6081 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:22.226152 6081 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 17:44:22.226173 6081 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 17:44:22.226188 6081 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 17:44:22.226210 6081 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 17:44:22.226214 6081 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:22.226252 6081 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 17:44:22.226265 6081 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 17:44:22.226281 6081 factory.go:656] Stopping watch factory\\\\nI0126 17:44:22.226291 6081 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:22.226294 6081 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"I0126 17:44:23.637368 6207 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637426 6207 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637455 6207 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637507 6207 
reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637573 6207 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:23.637655 6207 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637512 6207 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637548 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/
var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.201886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 
17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.201965 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.201975 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.201991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.202002 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:25Z","lastTransitionTime":"2026-01-26T17:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.304798 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.304841 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.304853 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.304875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.304891 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:25Z","lastTransitionTime":"2026-01-26T17:44:25Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.408003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.408053 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.408274 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.408299 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.408312 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:25Z","lastTransitionTime":"2026-01-26T17:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.510877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.510916 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.510925 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.510939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.510966 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:25Z","lastTransitionTime":"2026-01-26T17:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.553246 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 08:18:44.745369201 +0000 UTC Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.613862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.613899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.613909 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.613923 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.613938 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:25Z","lastTransitionTime":"2026-01-26T17:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.716335 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.716378 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.716390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.716416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.716430 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:25Z","lastTransitionTime":"2026-01-26T17:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.820088 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.820147 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.820168 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.820198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.820218 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:25Z","lastTransitionTime":"2026-01-26T17:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.883124 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" event={"ID":"98cea2bc-3771-492e-8944-f87958ff034a","Type":"ContainerStarted","Data":"c9aee78daaabc4a214d93667b49abdaaa6d95308466d0cc445ee7225b0c177c4"} Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.883185 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" event={"ID":"98cea2bc-3771-492e-8944-f87958ff034a","Type":"ContainerStarted","Data":"8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81"} Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.886081 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/1.log" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.890274 4787 scope.go:117] "RemoveContainer" containerID="c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c" Jan 26 17:44:25 crc kubenswrapper[4787]: E0126 17:44:25.890403 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.901993 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.915414 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-vkdfd"] Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.916318 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:25 crc kubenswrapper[4787]: E0126 17:44:25.916461 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.918260 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.922903 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.922964 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.922976 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.923025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.923036 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:25Z","lastTransitionTime":"2026-01-26T17:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.933121 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.953524 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.967269 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.980797 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.989971 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:25 crc kubenswrapper[4787]: I0126 17:44:25.999703 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95
308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.018101 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.025695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.025738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.025747 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.025762 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.025773 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:26Z","lastTransitionTime":"2026-01-26T17:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.046284 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d25ca8151be19b0cac3ed5377a21f9956480ac27890b15ceeda88c0910af69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:22Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:22.226087 6081 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 17:44:22.226116 6081 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 17:44:22.226144 6081 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0126 17:44:22.226150 6081 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 17:44:22.226163 6081 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:22.226152 6081 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 17:44:22.226173 6081 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 17:44:22.226188 6081 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 17:44:22.226210 6081 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 17:44:22.226214 6081 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:22.226252 6081 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 17:44:22.226265 6081 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 17:44:22.226281 6081 factory.go:656] Stopping watch factory\\\\nI0126 17:44:22.226291 6081 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:22.226294 6081 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"I0126 17:44:23.637368 6207 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637426 6207 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637455 6207 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637507 6207 
reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637573 6207 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:23.637655 6207 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637512 6207 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637548 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/
var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.050457 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mvrhw\" (UniqueName: \"kubernetes.io/projected/f04b2906-5567-4455-a1e8-5d85d5ea882e-kube-api-access-mvrhw\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.050546 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.061165 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.075847 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.087104 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.102771 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-
syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.117848 4787 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.128765 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.128808 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.128824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.128840 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.128851 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:26Z","lastTransitionTime":"2026-01-26T17:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.130386 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.141219 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.151963 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.152017 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvrhw\" (UniqueName: \"kubernetes.io/projected/f04b2906-5567-4455-a1e8-5d85d5ea882e-kube-api-access-mvrhw\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.152125 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.152177 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs podName:f04b2906-5567-4455-a1e8-5d85d5ea882e nodeName:}" failed. No retries permitted until 2026-01-26 17:44:26.652164285 +0000 UTC m=+35.359300418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs") pod "network-metrics-daemon-vkdfd" (UID: "f04b2906-5567-4455-a1e8-5d85d5ea882e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.153073 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.164660 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.171026 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvrhw\" (UniqueName: \"kubernetes.io/projected/f04b2906-5567-4455-a1e8-5d85d5ea882e-kube-api-access-mvrhw\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.183154 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.200139 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.216349 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"I0126 17:44:23.637368 6207 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637426 6207 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637455 6207 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637507 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637573 6207 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:23.637655 6207 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637512 6207 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637548 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf
07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.227649 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.231363 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.231407 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.231418 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.231436 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.231450 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:26Z","lastTransitionTime":"2026-01-26T17:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.237742 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.246587 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.257372 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.267218 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.278197 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.289268 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.300573 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc 
kubenswrapper[4787]: I0126 17:44:26.311349 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.324355 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.333844 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.333882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.333891 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.333905 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.333914 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:26Z","lastTransitionTime":"2026-01-26T17:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.353972 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.354120 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.354217 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-26 17:44:42.354185548 +0000 UTC m=+51.061321691 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.354235 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.354298 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:42.35428129 +0000 UTC m=+51.061417443 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.356886 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4
a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\
"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.435874 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.435943 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.435964 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.435989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.435999 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:26Z","lastTransitionTime":"2026-01-26T17:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.455578 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.455615 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.455647 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.455788 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.455808 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.455827 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.455884 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:42.455866882 +0000 UTC m=+51.163003015 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.455829 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.455787 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.456169 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.456181 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.456143 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:42.456107367 +0000 UTC m=+51.163243540 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.456229 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 17:44:42.45622003 +0000 UTC m=+51.163356163 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.538935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.539030 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.539045 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.539069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.539084 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:26Z","lastTransitionTime":"2026-01-26T17:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.554300 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:55:17.404651014 +0000 UTC Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.588212 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.588278 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.588335 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.588382 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.588498 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.588557 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.641318 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.641361 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.641372 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.641393 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.641405 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:26Z","lastTransitionTime":"2026-01-26T17:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.658541 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.658724 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: E0126 17:44:26.659075 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs podName:f04b2906-5567-4455-a1e8-5d85d5ea882e nodeName:}" failed. No retries permitted until 2026-01-26 17:44:27.658812738 +0000 UTC m=+36.365948861 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs") pod "network-metrics-daemon-vkdfd" (UID: "f04b2906-5567-4455-a1e8-5d85d5ea882e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.743905 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.743979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.743991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.744006 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.744020 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:26Z","lastTransitionTime":"2026-01-26T17:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.847094 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.847160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.847172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.847187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.847197 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:26Z","lastTransitionTime":"2026-01-26T17:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.950074 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.950124 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.950135 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.950152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:26 crc kubenswrapper[4787]: I0126 17:44:26.950164 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:26Z","lastTransitionTime":"2026-01-26T17:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.053533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.053588 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.053599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.053615 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.053626 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:27Z","lastTransitionTime":"2026-01-26T17:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.156361 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.156412 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.156427 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.156445 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.156460 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:27Z","lastTransitionTime":"2026-01-26T17:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.258677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.258715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.258724 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.258740 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.258750 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:27Z","lastTransitionTime":"2026-01-26T17:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.361417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.361475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.361488 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.361517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.361534 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:27Z","lastTransitionTime":"2026-01-26T17:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.463824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.463872 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.463882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.463901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.463912 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:27Z","lastTransitionTime":"2026-01-26T17:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.555029 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:13:13.478829083 +0000 UTC Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.567099 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.567207 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.567225 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.567269 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.567285 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:27Z","lastTransitionTime":"2026-01-26T17:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.588732 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:27 crc kubenswrapper[4787]: E0126 17:44:27.588913 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.670204 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.670236 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.670245 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.670258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.670268 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:27Z","lastTransitionTime":"2026-01-26T17:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.671289 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:27 crc kubenswrapper[4787]: E0126 17:44:27.671441 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:27 crc kubenswrapper[4787]: E0126 17:44:27.671521 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs podName:f04b2906-5567-4455-a1e8-5d85d5ea882e nodeName:}" failed. No retries permitted until 2026-01-26 17:44:29.671502227 +0000 UTC m=+38.378638360 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs") pod "network-metrics-daemon-vkdfd" (UID: "f04b2906-5567-4455-a1e8-5d85d5ea882e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.772516 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.772547 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.772556 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.772569 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.772578 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:27Z","lastTransitionTime":"2026-01-26T17:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.874518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.874562 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.874572 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.874587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.874601 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:27Z","lastTransitionTime":"2026-01-26T17:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.977879 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.977941 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.977977 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.978000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:27 crc kubenswrapper[4787]: I0126 17:44:27.978021 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:27Z","lastTransitionTime":"2026-01-26T17:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.081226 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.081295 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.081317 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.081343 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.081361 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:28Z","lastTransitionTime":"2026-01-26T17:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.184363 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.184405 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.184416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.184438 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.184453 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:28Z","lastTransitionTime":"2026-01-26T17:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.287477 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.287542 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.287560 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.287582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.287598 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:28Z","lastTransitionTime":"2026-01-26T17:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.372915 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.374021 4787 scope.go:117] "RemoveContainer" containerID="c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c" Jan 26 17:44:28 crc kubenswrapper[4787]: E0126 17:44:28.374219 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.390046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.390112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.390129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.390155 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.390171 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:28Z","lastTransitionTime":"2026-01-26T17:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.492594 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.492632 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.492643 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.492662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.492676 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:28Z","lastTransitionTime":"2026-01-26T17:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.556396 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:51:28.556411876 +0000 UTC Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.589061 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.589142 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.589063 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:28 crc kubenswrapper[4787]: E0126 17:44:28.589214 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:28 crc kubenswrapper[4787]: E0126 17:44:28.589282 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:28 crc kubenswrapper[4787]: E0126 17:44:28.589344 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.594856 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.594906 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.594930 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.594990 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.595019 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:28Z","lastTransitionTime":"2026-01-26T17:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.697456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.697493 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.697504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.697519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.697533 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:28Z","lastTransitionTime":"2026-01-26T17:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.799790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.799820 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.799828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.799840 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.799851 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:28Z","lastTransitionTime":"2026-01-26T17:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.902488 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.902547 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.902561 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.902580 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:28 crc kubenswrapper[4787]: I0126 17:44:28.902593 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:28Z","lastTransitionTime":"2026-01-26T17:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.005363 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.005409 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.005426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.005449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.005466 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:29Z","lastTransitionTime":"2026-01-26T17:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.108055 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.108114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.108123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.108138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.108148 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:29Z","lastTransitionTime":"2026-01-26T17:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.210935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.211009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.211019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.211035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.211045 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:29Z","lastTransitionTime":"2026-01-26T17:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.313299 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.313372 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.313392 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.313417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.313435 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:29Z","lastTransitionTime":"2026-01-26T17:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.416709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.416773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.416795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.416828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.416850 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:29Z","lastTransitionTime":"2026-01-26T17:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.519554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.519612 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.519626 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.519649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.519663 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:29Z","lastTransitionTime":"2026-01-26T17:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.556902 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:22:25.263058851 +0000 UTC Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.589478 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:29 crc kubenswrapper[4787]: E0126 17:44:29.589732 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.622342 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.622405 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.622419 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.622439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.622454 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:29Z","lastTransitionTime":"2026-01-26T17:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.695221 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:29 crc kubenswrapper[4787]: E0126 17:44:29.695428 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:29 crc kubenswrapper[4787]: E0126 17:44:29.695547 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs podName:f04b2906-5567-4455-a1e8-5d85d5ea882e nodeName:}" failed. No retries permitted until 2026-01-26 17:44:33.695517084 +0000 UTC m=+42.402653227 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs") pod "network-metrics-daemon-vkdfd" (UID: "f04b2906-5567-4455-a1e8-5d85d5ea882e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.725112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.725149 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.725158 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.725171 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.725181 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:29Z","lastTransitionTime":"2026-01-26T17:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.828233 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.828310 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.828332 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.828362 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.828386 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:29Z","lastTransitionTime":"2026-01-26T17:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.930588 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.930631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.930643 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.930661 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:29 crc kubenswrapper[4787]: I0126 17:44:29.930673 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:29Z","lastTransitionTime":"2026-01-26T17:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.033507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.033566 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.033579 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.033596 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.033607 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:30Z","lastTransitionTime":"2026-01-26T17:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.136460 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.136504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.136515 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.136530 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.136540 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:30Z","lastTransitionTime":"2026-01-26T17:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.239902 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.240028 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.240055 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.240085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.240109 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:30Z","lastTransitionTime":"2026-01-26T17:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.342398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.342441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.342450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.342464 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.342477 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:30Z","lastTransitionTime":"2026-01-26T17:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.445153 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.445221 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.445235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.445253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.445267 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:30Z","lastTransitionTime":"2026-01-26T17:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.548759 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.548818 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.548830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.548847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.548859 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:30Z","lastTransitionTime":"2026-01-26T17:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.557396 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:54:24.456075149 +0000 UTC Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.588700 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.588723 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.588711 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:30 crc kubenswrapper[4787]: E0126 17:44:30.588894 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:30 crc kubenswrapper[4787]: E0126 17:44:30.588813 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:30 crc kubenswrapper[4787]: E0126 17:44:30.589161 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.651063 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.651103 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.651111 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.651124 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.651135 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:30Z","lastTransitionTime":"2026-01-26T17:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.753312 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.753356 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.753367 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.753383 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.753393 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:30Z","lastTransitionTime":"2026-01-26T17:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.856627 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.856757 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.856783 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.856812 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.856834 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:30Z","lastTransitionTime":"2026-01-26T17:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.959785 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.959836 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.959853 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.959876 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:30 crc kubenswrapper[4787]: I0126 17:44:30.959892 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:30Z","lastTransitionTime":"2026-01-26T17:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.063376 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.063432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.063453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.063485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.063506 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:31Z","lastTransitionTime":"2026-01-26T17:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.166825 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.166863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.166877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.166896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.166909 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:31Z","lastTransitionTime":"2026-01-26T17:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.269894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.270004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.270032 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.270062 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.270085 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:31Z","lastTransitionTime":"2026-01-26T17:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.373003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.373046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.373055 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.373070 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.373082 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:31Z","lastTransitionTime":"2026-01-26T17:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.475115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.475160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.475174 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.475195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.475213 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:31Z","lastTransitionTime":"2026-01-26T17:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.557995 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:37:36.160681167 +0000 UTC Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.578396 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.578525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.578541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.578602 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.578617 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:31Z","lastTransitionTime":"2026-01-26T17:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.588495 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:31 crc kubenswrapper[4787]: E0126 17:44:31.588633 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.603158 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.618000 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.633466 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.645586 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95
308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.666777 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.681927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.682046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.682081 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.682098 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.682108 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:31Z","lastTransitionTime":"2026-01-26T17:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.686990 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"I0126 17:44:23.637368 6207 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637426 6207 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637455 6207 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637507 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637573 6207 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:23.637655 6207 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637512 6207 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637548 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf
07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.707549 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.725289 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.738150 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609
c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.751508 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.763682 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.777777 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.785002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:31 crc 
kubenswrapper[4787]: I0126 17:44:31.785037 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.785045 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.785097 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.785108 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:31Z","lastTransitionTime":"2026-01-26T17:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.790388 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.803209 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc 
kubenswrapper[4787]: I0126 17:44:31.817868 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.830711 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.845387 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.887449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.887498 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.887511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.887527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.887537 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:31Z","lastTransitionTime":"2026-01-26T17:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.989865 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.989917 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.989926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.989942 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:31 crc kubenswrapper[4787]: I0126 17:44:31.989989 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:31Z","lastTransitionTime":"2026-01-26T17:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.092421 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.092459 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.092469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.092484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.092495 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:32Z","lastTransitionTime":"2026-01-26T17:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.195099 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.195213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.195237 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.195269 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.195288 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:32Z","lastTransitionTime":"2026-01-26T17:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.297760 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.297818 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.297843 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.297871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.297893 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:32Z","lastTransitionTime":"2026-01-26T17:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.401674 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.401744 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.401764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.401792 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.401815 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:32Z","lastTransitionTime":"2026-01-26T17:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.505053 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.505104 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.505128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.505150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.505166 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:32Z","lastTransitionTime":"2026-01-26T17:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.558663 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:44:09.614598395 +0000 UTC Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.589064 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.589112 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.589198 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:32 crc kubenswrapper[4787]: E0126 17:44:32.589353 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:32 crc kubenswrapper[4787]: E0126 17:44:32.589885 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:32 crc kubenswrapper[4787]: E0126 17:44:32.590021 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.607276 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.607323 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.607334 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.607350 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.607362 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:32Z","lastTransitionTime":"2026-01-26T17:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.710338 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.710393 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.710407 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.710430 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.710444 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:32Z","lastTransitionTime":"2026-01-26T17:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.813345 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.813531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.813571 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.813606 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.813643 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:32Z","lastTransitionTime":"2026-01-26T17:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.915848 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.915905 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.915913 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.915929 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:32 crc kubenswrapper[4787]: I0126 17:44:32.915941 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:32Z","lastTransitionTime":"2026-01-26T17:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.017927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.018005 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.018013 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.018028 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.018043 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.121263 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.121327 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.121342 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.121364 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.121382 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.224300 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.224360 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.224373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.224392 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.224405 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.327518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.327563 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.327574 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.327590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.327600 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.430331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.430374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.430385 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.430400 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.430412 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.461120 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.461153 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.461162 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.461175 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.461183 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 17:44:33 crc kubenswrapper[4787]: E0126 17:44:33.476935 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:33Z is after 2025-08-24T17:21:41Z"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.482492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.482554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.482577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.482604 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.482634 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 17:44:33 crc kubenswrapper[4787]: E0126 17:44:33.502221 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:33Z is after 2025-08-24T17:21:41Z"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.506833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.506876 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.506884 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.506899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.506913 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 17:44:33 crc kubenswrapper[4787]: E0126 17:44:33.521373 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.526046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.526113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.526130 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.526155 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.526176 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:33 crc kubenswrapper[4787]: E0126 17:44:33.544005 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.547429 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.547482 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.547494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.547510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.547546 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.560051 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:32:25.709349067 +0000 UTC Jan 26 17:44:33 crc kubenswrapper[4787]: E0126 17:44:33.560374 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",
\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:33 crc kubenswrapper[4787]: E0126 17:44:33.560517 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.561999 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.562057 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.562077 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.562099 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.562116 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.588334 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:33 crc kubenswrapper[4787]: E0126 17:44:33.588478 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.670775 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.670827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.670840 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.670859 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.670870 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.735401 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:33 crc kubenswrapper[4787]: E0126 17:44:33.735596 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:33 crc kubenswrapper[4787]: E0126 17:44:33.735710 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs podName:f04b2906-5567-4455-a1e8-5d85d5ea882e nodeName:}" failed. No retries permitted until 2026-01-26 17:44:41.735678539 +0000 UTC m=+50.442814712 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs") pod "network-metrics-daemon-vkdfd" (UID: "f04b2906-5567-4455-a1e8-5d85d5ea882e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.773473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.773520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.773532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.773548 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.773559 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.876342 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.876400 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.876416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.876437 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.876453 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.981184 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.981272 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.981296 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.981329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:33 crc kubenswrapper[4787]: I0126 17:44:33.981353 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:33Z","lastTransitionTime":"2026-01-26T17:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.084589 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.084656 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.084673 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.084696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.084712 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:34Z","lastTransitionTime":"2026-01-26T17:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.209573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.209625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.209638 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.209656 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.209669 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:34Z","lastTransitionTime":"2026-01-26T17:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.312828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.312891 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.312909 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.312932 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.312976 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:34Z","lastTransitionTime":"2026-01-26T17:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.415652 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.415715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.415731 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.415760 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.415780 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:34Z","lastTransitionTime":"2026-01-26T17:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.518553 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.518614 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.518631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.518655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.518673 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:34Z","lastTransitionTime":"2026-01-26T17:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.560708 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:08:51.470499035 +0000 UTC Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.589139 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.589171 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:34 crc kubenswrapper[4787]: E0126 17:44:34.589282 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.589324 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:34 crc kubenswrapper[4787]: E0126 17:44:34.589477 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:34 crc kubenswrapper[4787]: E0126 17:44:34.589733 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.622064 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.622104 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.622112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.622128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.622140 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:34Z","lastTransitionTime":"2026-01-26T17:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.724430 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.724466 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.724479 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.724494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.724504 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:34Z","lastTransitionTime":"2026-01-26T17:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.827340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.827376 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.827384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.827396 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.827404 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:34Z","lastTransitionTime":"2026-01-26T17:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.930746 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.930780 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.930790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.930809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:34 crc kubenswrapper[4787]: I0126 17:44:34.930825 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:34Z","lastTransitionTime":"2026-01-26T17:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.034102 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.034160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.034180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.034211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.034231 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:35Z","lastTransitionTime":"2026-01-26T17:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.137787 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.137835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.137846 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.137862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.137873 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:35Z","lastTransitionTime":"2026-01-26T17:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.240877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.241010 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.241042 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.241073 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.241096 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:35Z","lastTransitionTime":"2026-01-26T17:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.343910 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.344011 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.344023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.344043 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.344056 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:35Z","lastTransitionTime":"2026-01-26T17:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.446693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.446736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.446754 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.446774 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.446787 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:35Z","lastTransitionTime":"2026-01-26T17:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.550389 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.550435 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.550445 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.550462 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.550476 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:35Z","lastTransitionTime":"2026-01-26T17:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.561561 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:33:34.896335073 +0000 UTC Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.589362 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:35 crc kubenswrapper[4787]: E0126 17:44:35.589538 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.652566 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.652629 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.652639 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.652655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.652666 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:35Z","lastTransitionTime":"2026-01-26T17:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.755790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.755858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.755880 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.755905 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.755920 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:35Z","lastTransitionTime":"2026-01-26T17:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.858324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.858371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.858382 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.858401 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.858415 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:35Z","lastTransitionTime":"2026-01-26T17:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.960763 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.960818 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.960827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.960842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:35 crc kubenswrapper[4787]: I0126 17:44:35.960852 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:35Z","lastTransitionTime":"2026-01-26T17:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.067670 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.067720 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.067732 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.067749 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.067762 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:36Z","lastTransitionTime":"2026-01-26T17:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.170508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.170568 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.170582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.170601 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.170616 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:36Z","lastTransitionTime":"2026-01-26T17:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.272810 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.272869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.272887 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.272912 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.272930 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:36Z","lastTransitionTime":"2026-01-26T17:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.375613 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.375680 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.375695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.375710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.375720 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:36Z","lastTransitionTime":"2026-01-26T17:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.478391 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.478438 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.478448 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.478463 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.478475 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:36Z","lastTransitionTime":"2026-01-26T17:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.562481 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 19:31:40.877354843 +0000 UTC Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.580747 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.580815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.580837 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.580866 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.580888 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:36Z","lastTransitionTime":"2026-01-26T17:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.588453 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.588587 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.588857 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:36 crc kubenswrapper[4787]: E0126 17:44:36.589106 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:36 crc kubenswrapper[4787]: E0126 17:44:36.589370 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:36 crc kubenswrapper[4787]: E0126 17:44:36.589566 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.684672 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.684937 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.685052 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.685143 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.685207 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:36Z","lastTransitionTime":"2026-01-26T17:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.787719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.788013 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.788086 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.788156 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.788212 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:36Z","lastTransitionTime":"2026-01-26T17:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.890808 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.890931 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.891009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.891041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.891061 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:36Z","lastTransitionTime":"2026-01-26T17:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.994069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.994141 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.994158 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.994184 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:36 crc kubenswrapper[4787]: I0126 17:44:36.994204 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:36Z","lastTransitionTime":"2026-01-26T17:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.097286 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.097353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.097365 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.097381 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.097392 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:37Z","lastTransitionTime":"2026-01-26T17:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.201129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.201184 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.201200 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.201226 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.201241 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:37Z","lastTransitionTime":"2026-01-26T17:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.304478 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.304542 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.304556 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.304579 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.304599 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:37Z","lastTransitionTime":"2026-01-26T17:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.407218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.407285 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.407307 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.407338 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.407361 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:37Z","lastTransitionTime":"2026-01-26T17:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.510712 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.510759 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.510770 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.510787 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.510800 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:37Z","lastTransitionTime":"2026-01-26T17:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.563418 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:26:12.50359672 +0000 UTC Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.588534 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:37 crc kubenswrapper[4787]: E0126 17:44:37.588730 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.614258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.614397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.614461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.614488 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.614507 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:37Z","lastTransitionTime":"2026-01-26T17:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.716653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.716709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.716725 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.716750 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.716767 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:37Z","lastTransitionTime":"2026-01-26T17:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.819979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.820037 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.820055 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.820080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.820097 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:37Z","lastTransitionTime":"2026-01-26T17:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.923250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.923294 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.923304 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.923321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:37 crc kubenswrapper[4787]: I0126 17:44:37.923332 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:37Z","lastTransitionTime":"2026-01-26T17:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.026629 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.026693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.026711 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.026742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.026765 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:38Z","lastTransitionTime":"2026-01-26T17:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.129861 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.129921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.129935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.129979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.129995 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:38Z","lastTransitionTime":"2026-01-26T17:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.233099 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.233177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.233199 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.233225 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.233271 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:38Z","lastTransitionTime":"2026-01-26T17:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.336405 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.336489 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.336514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.336541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.336562 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:38Z","lastTransitionTime":"2026-01-26T17:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.443600 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.443650 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.443666 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.443686 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.443701 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:38Z","lastTransitionTime":"2026-01-26T17:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.546551 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.546638 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.546655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.546681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.546698 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:38Z","lastTransitionTime":"2026-01-26T17:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.564021 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 22:45:54.452581813 +0000 UTC Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.588635 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.588668 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.588649 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:38 crc kubenswrapper[4787]: E0126 17:44:38.588787 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:38 crc kubenswrapper[4787]: E0126 17:44:38.588854 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:38 crc kubenswrapper[4787]: E0126 17:44:38.589015 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.650607 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.650672 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.650684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.650700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.650713 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:38Z","lastTransitionTime":"2026-01-26T17:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.753473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.753515 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.753523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.753537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.753549 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:38Z","lastTransitionTime":"2026-01-26T17:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.856261 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.856312 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.856321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.856335 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.856345 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:38Z","lastTransitionTime":"2026-01-26T17:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.958920 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.958989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.959002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.959025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:38 crc kubenswrapper[4787]: I0126 17:44:38.959039 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:38Z","lastTransitionTime":"2026-01-26T17:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.061896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.062002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.062013 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.062032 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.062042 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:39Z","lastTransitionTime":"2026-01-26T17:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.165053 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.165114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.165127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.165148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.165165 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:39Z","lastTransitionTime":"2026-01-26T17:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.268003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.268060 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.268072 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.268089 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.268103 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:39Z","lastTransitionTime":"2026-01-26T17:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.371111 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.371176 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.371199 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.371229 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.371251 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:39Z","lastTransitionTime":"2026-01-26T17:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.474123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.474182 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.474198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.474225 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.474243 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:39Z","lastTransitionTime":"2026-01-26T17:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.564152 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:05:13.29874067 +0000 UTC Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.577326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.577412 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.577437 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.577469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.577494 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:39Z","lastTransitionTime":"2026-01-26T17:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.589103 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:39 crc kubenswrapper[4787]: E0126 17:44:39.589289 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.590285 4787 scope.go:117] "RemoveContainer" containerID="c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.680234 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.680590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.680607 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.680631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.680648 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:39Z","lastTransitionTime":"2026-01-26T17:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.785753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.785820 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.785841 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.785869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.785894 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:39Z","lastTransitionTime":"2026-01-26T17:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.889926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.890018 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.890034 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.890053 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.890444 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:39Z","lastTransitionTime":"2026-01-26T17:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.939837 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/1.log" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.943084 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5"} Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.944925 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.979499 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51
d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:39Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.993207 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.993260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.993275 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.993295 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:39 crc kubenswrapper[4787]: I0126 17:44:39.993309 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:39Z","lastTransitionTime":"2026-01-26T17:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.000049 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:39Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.021713 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.041031 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.055669 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95
308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.073147 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.097288 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.097345 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.097357 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.097378 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.097390 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:40Z","lastTransitionTime":"2026-01-26T17:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.098297 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.120787 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.143559 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"I0126 17:44:23.637368 6207 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637426 6207 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637455 6207 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637507 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637573 6207 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:23.637655 6207 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637512 6207 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637548 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"moun
tPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.156810 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha2
56:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.170077 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.183474 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.195482 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.199641 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.199684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.199697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.199756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.199772 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:40Z","lastTransitionTime":"2026-01-26T17:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.455789 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.457809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.457860 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.457875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.457895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.457911 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:40Z","lastTransitionTime":"2026-01-26T17:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.480041 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.492835 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9b
d4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.506085 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:40Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:40 crc 
kubenswrapper[4787]: I0126 17:44:40.560895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.560982 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.560996 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.561019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.561032 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:40Z","lastTransitionTime":"2026-01-26T17:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.564966 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:49:56.366238328 +0000 UTC Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.588378 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.588538 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.588610 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:40 crc kubenswrapper[4787]: E0126 17:44:40.588728 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:40 crc kubenswrapper[4787]: E0126 17:44:40.588935 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:40 crc kubenswrapper[4787]: E0126 17:44:40.589216 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.663041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.663081 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.663090 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.663106 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.663116 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:40Z","lastTransitionTime":"2026-01-26T17:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.765444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.765512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.765535 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.765561 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.765582 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:40Z","lastTransitionTime":"2026-01-26T17:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.869182 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.869253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.869452 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.869477 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.869495 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:40Z","lastTransitionTime":"2026-01-26T17:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.973448 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.973518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.973532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.973554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:40 crc kubenswrapper[4787]: I0126 17:44:40.973571 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:40Z","lastTransitionTime":"2026-01-26T17:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.076579 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.076707 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.076726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.076747 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.076761 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:41Z","lastTransitionTime":"2026-01-26T17:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.179245 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.179304 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.179320 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.179347 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.179365 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:41Z","lastTransitionTime":"2026-01-26T17:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.281500 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.281555 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.281574 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.281630 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.281646 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:41Z","lastTransitionTime":"2026-01-26T17:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.384297 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.384339 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.384351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.384368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.384380 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:41Z","lastTransitionTime":"2026-01-26T17:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.487349 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.487416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.487441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.487471 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.487495 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:41Z","lastTransitionTime":"2026-01-26T17:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.565274 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:13:02.202945904 +0000 UTC Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.588286 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:41 crc kubenswrapper[4787]: E0126 17:44:41.588419 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.592536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.592572 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.592581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.592639 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.592651 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:41Z","lastTransitionTime":"2026-01-26T17:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.606131 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.622209 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.633864 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.662567 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"I0126 17:44:23.637368 6207 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637426 6207 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637455 6207 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637507 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637573 6207 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:23.637655 6207 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637512 6207 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637548 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"moun
tPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.682940 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha2
56:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.694456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.694521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.694541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.694565 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.694582 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:41Z","lastTransitionTime":"2026-01-26T17:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.699768 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.718212 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.731176 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.746479 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.766474 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.772926 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:41 crc kubenswrapper[4787]: E0126 17:44:41.773135 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:41 crc kubenswrapper[4787]: E0126 17:44:41.773223 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs podName:f04b2906-5567-4455-a1e8-5d85d5ea882e nodeName:}" failed. No retries permitted until 2026-01-26 17:44:57.773195813 +0000 UTC m=+66.480332036 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs") pod "network-metrics-daemon-vkdfd" (UID: "f04b2906-5567-4455-a1e8-5d85d5ea882e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.779389 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.791092 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.797714 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.797759 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.797774 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.797791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.797803 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:41Z","lastTransitionTime":"2026-01-26T17:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.812167 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.825757 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.838379 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee632
4b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.853228 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.869001 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95
308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.900417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.900470 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.900481 4787 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.900499 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.900511 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:41Z","lastTransitionTime":"2026-01-26T17:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.952123 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/2.log" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.952708 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/1.log" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.955646 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5" exitCode=1 Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.955693 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5"} Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.955738 4787 scope.go:117] "RemoveContainer" containerID="c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 
17:44:41.956495 4787 scope.go:117] "RemoveContainer" containerID="bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5" Jan 26 17:44:41 crc kubenswrapper[4787]: E0126 17:44:41.956706 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.975772 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:41 crc kubenswrapper[4787]: I0126 17:44:41.995477 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:41Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.003331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.003456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.003469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.003486 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.003498 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:42Z","lastTransitionTime":"2026-01-26T17:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.008197 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.021972 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.033208 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.047565 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.058619 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.070176 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.080999 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: 
I0126 17:44:42.091401 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.106235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.106287 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.106299 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.106316 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.106333 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:42Z","lastTransitionTime":"2026-01-26T17:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.111040 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.124120 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.137314 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.155337 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.170037 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.181743 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609
c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.200227 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"I0126 17:44:23.637368 6207 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637426 6207 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637455 6207 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637507 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637573 6207 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:23.637655 6207 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637512 6207 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637548 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:40Z\\\",\\\"message\\\":\\\"640357 6417 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:40.640481 6417 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.640794 6417 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.646919 6417 handler.go:190] 
Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:40.647020 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:40.647225 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 17:44:40.647297 6417 factory.go:656] Stopping watch factory\\\\nI0126 17:44:40.647328 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:40.651326 6417 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:44:40.651368 6417 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:44:40.651412 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:44:40.651437 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:44:40.651488 6417 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:42Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.209357 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 
17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.209418 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.209428 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.209464 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.209476 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:42Z","lastTransitionTime":"2026-01-26T17:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.312960 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.313003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.313013 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.313028 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.313039 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:42Z","lastTransitionTime":"2026-01-26T17:44:42Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.379768 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.379977 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:45:14.37993798 +0000 UTC m=+83.087074113 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.380025 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.380165 4787 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.380238 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:45:14.380219447 +0000 UTC m=+83.087355600 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.415480 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.415514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.415523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.415535 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.415544 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:42Z","lastTransitionTime":"2026-01-26T17:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.481214 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.481282 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.481325 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.481397 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.481422 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.481441 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl 
for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.481487 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.481568 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.481504 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 17:45:14.48148627 +0000 UTC m=+83.188622423 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.481603 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.481622 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.481635 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:45:14.481609123 +0000 UTC m=+83.188745296 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.481694 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 17:45:14.481674645 +0000 UTC m=+83.188810818 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.517724 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.517765 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.517776 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.517791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.517804 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:42Z","lastTransitionTime":"2026-01-26T17:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.565643 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:56:05.297834288 +0000 UTC Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.588489 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.588489 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.588621 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.588656 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.588717 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:42 crc kubenswrapper[4787]: E0126 17:44:42.588877 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.619815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.619857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.619871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.619889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.619901 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:42Z","lastTransitionTime":"2026-01-26T17:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.723162 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.723237 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.723260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.723288 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.723311 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:42Z","lastTransitionTime":"2026-01-26T17:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.825778 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.825842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.825861 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.825884 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.825903 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:42Z","lastTransitionTime":"2026-01-26T17:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.929332 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.929415 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.929436 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.929461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.929480 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:42Z","lastTransitionTime":"2026-01-26T17:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:42 crc kubenswrapper[4787]: I0126 17:44:42.961228 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/2.log" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.033080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.033150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.033168 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.033195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.033217 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.136457 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.136523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.136537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.136559 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.136572 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.239797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.239889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.239908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.239932 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.239994 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.343279 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.343345 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.343367 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.343397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.343418 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.446825 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.446903 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.446922 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.446985 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.447004 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.550570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.550655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.550677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.550706 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.550731 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.566296 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:56:19.904681655 +0000 UTC Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.589198 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:43 crc kubenswrapper[4787]: E0126 17:44:43.589456 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.637633 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.637706 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.637729 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.637758 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.637787 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: E0126 17:44:43.656798 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.662741 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.662835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.662863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.662895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.662920 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: E0126 17:44:43.684728 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.690435 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.690487 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.690501 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.690522 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.690536 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: E0126 17:44:43.708397 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.719427 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.719523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.719550 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.719591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.719608 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: E0126 17:44:43.738814 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.743774 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.743813 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.743824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.743839 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.743851 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: E0126 17:44:43.757330 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:43 crc kubenswrapper[4787]: E0126 17:44:43.757508 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.759370 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.759405 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.759420 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.759443 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.759460 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.862006 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.862073 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.862092 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.862119 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.862140 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.965006 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.965047 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.965058 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.965080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:43 crc kubenswrapper[4787]: I0126 17:44:43.965089 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:43Z","lastTransitionTime":"2026-01-26T17:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.068438 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.068476 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.068486 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.068505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.068517 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:44Z","lastTransitionTime":"2026-01-26T17:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.171300 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.171360 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.171373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.171392 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.171407 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:44Z","lastTransitionTime":"2026-01-26T17:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.274260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.274308 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.274321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.274336 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.274348 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:44Z","lastTransitionTime":"2026-01-26T17:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.377799 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.377846 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.377858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.377878 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.377890 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:44Z","lastTransitionTime":"2026-01-26T17:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.480638 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.480694 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.480703 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.480723 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.480734 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:44Z","lastTransitionTime":"2026-01-26T17:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.566840 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:27:57.944602144 +0000 UTC Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.583500 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.583558 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.583575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.583596 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.583609 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:44Z","lastTransitionTime":"2026-01-26T17:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.589091 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.589202 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.589387 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:44 crc kubenswrapper[4787]: E0126 17:44:44.589584 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:44 crc kubenswrapper[4787]: E0126 17:44:44.589384 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:44 crc kubenswrapper[4787]: E0126 17:44:44.589817 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.689786 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.689849 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.689871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.689900 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.689925 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:44Z","lastTransitionTime":"2026-01-26T17:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.792689 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.792745 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.792761 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.792785 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.792803 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:44Z","lastTransitionTime":"2026-01-26T17:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.896424 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.896483 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.896492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.896506 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.896515 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:44Z","lastTransitionTime":"2026-01-26T17:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.998914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.999029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.999054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.999084 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:44 crc kubenswrapper[4787]: I0126 17:44:44.999109 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:44Z","lastTransitionTime":"2026-01-26T17:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.101472 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.101512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.101520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.101532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.101541 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:45Z","lastTransitionTime":"2026-01-26T17:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.204621 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.204698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.204722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.204752 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.204776 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:45Z","lastTransitionTime":"2026-01-26T17:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.307276 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.307353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.307379 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.307410 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.307435 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:45Z","lastTransitionTime":"2026-01-26T17:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.409830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.409889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.409898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.409913 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.409923 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:45Z","lastTransitionTime":"2026-01-26T17:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.513311 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.513403 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.513416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.513442 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.513458 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:45Z","lastTransitionTime":"2026-01-26T17:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.567261 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:13:02.472257499 +0000 UTC Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.588898 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:45 crc kubenswrapper[4787]: E0126 17:44:45.589282 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.616103 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.616151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.616165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.616184 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.616199 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:45Z","lastTransitionTime":"2026-01-26T17:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.719792 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.719847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.719862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.719882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.719897 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:45Z","lastTransitionTime":"2026-01-26T17:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.822796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.822859 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.822871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.822891 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.822906 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:45Z","lastTransitionTime":"2026-01-26T17:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.925733 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.925798 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.925807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.925821 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:45 crc kubenswrapper[4787]: I0126 17:44:45.925831 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:45Z","lastTransitionTime":"2026-01-26T17:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.028051 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.028113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.028122 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.028139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.028149 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:46Z","lastTransitionTime":"2026-01-26T17:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.131392 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.131450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.131466 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.131492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.131510 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:46Z","lastTransitionTime":"2026-01-26T17:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.233869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.233916 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.233931 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.233975 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.233992 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:46Z","lastTransitionTime":"2026-01-26T17:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.343816 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.343882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.343900 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.343924 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.343941 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:46Z","lastTransitionTime":"2026-01-26T17:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.446369 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.446412 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.446427 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.446444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.446457 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:46Z","lastTransitionTime":"2026-01-26T17:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.549365 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.549418 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.549430 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.549446 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.549459 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:46Z","lastTransitionTime":"2026-01-26T17:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.568170 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:24:40.819875625 +0000 UTC Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.588507 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.588613 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.588507 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:46 crc kubenswrapper[4787]: E0126 17:44:46.588767 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:46 crc kubenswrapper[4787]: E0126 17:44:46.588640 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:46 crc kubenswrapper[4787]: E0126 17:44:46.588981 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.651987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.652248 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.652309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.652375 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.652475 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:46Z","lastTransitionTime":"2026-01-26T17:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.754910 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.754978 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.754995 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.755014 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.755026 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:46Z","lastTransitionTime":"2026-01-26T17:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.776277 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.785864 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.792664 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.809125 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.819126 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.836288 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"I0126 17:44:23.637368 6207 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637426 6207 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637455 6207 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637507 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637573 6207 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:23.637655 6207 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637512 6207 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637548 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:40Z\\\",\\\"message\\\":\\\"640357 6417 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:40.640481 6417 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.640794 6417 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.646919 6417 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:40.647020 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:40.647225 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 17:44:40.647297 6417 factory.go:656] Stopping watch factory\\\\nI0126 17:44:40.647328 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:40.651326 6417 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:44:40.651368 6417 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:44:40.651412 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:44:40.651437 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:44:40.651488 6417 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"
mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.847883 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.857617 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.857668 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.857681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.857702 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.857719 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:46Z","lastTransitionTime":"2026-01-26T17:44:46Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.863045 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.878859 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.895747 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.909662 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.926086 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.936925 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.952101 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.960289 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.960447 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.960567 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.960695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.960776 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:46Z","lastTransitionTime":"2026-01-26T17:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.963860 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:46 crc kubenswrapper[4787]: I0126 17:44:46.978496 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95
308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.009843 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:47Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.022244 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:47Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.035165 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:47Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.062841 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.063139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.063224 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.063297 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.063386 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:47Z","lastTransitionTime":"2026-01-26T17:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.166675 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.166706 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.166714 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.166727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.166737 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:47Z","lastTransitionTime":"2026-01-26T17:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.270136 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.270207 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.270232 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.270260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.270278 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:47Z","lastTransitionTime":"2026-01-26T17:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.373314 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.373365 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.373379 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.373398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.373412 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:47Z","lastTransitionTime":"2026-01-26T17:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.476069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.476134 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.476152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.476175 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.476193 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:47Z","lastTransitionTime":"2026-01-26T17:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.569253 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:00:43.586496763 +0000 UTC Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.579563 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.579635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.579659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.579687 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.579707 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:47Z","lastTransitionTime":"2026-01-26T17:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.588390 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:47 crc kubenswrapper[4787]: E0126 17:44:47.588595 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.682620 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.682660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.682671 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.682686 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.682697 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:47Z","lastTransitionTime":"2026-01-26T17:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.785248 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.785304 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.785350 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.785371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.785387 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:47Z","lastTransitionTime":"2026-01-26T17:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.888245 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.888329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.888354 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.888387 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.888405 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:47Z","lastTransitionTime":"2026-01-26T17:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.990566 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.990642 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.990659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.990683 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:47 crc kubenswrapper[4787]: I0126 17:44:47.990701 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:47Z","lastTransitionTime":"2026-01-26T17:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.093804 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.093835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.093845 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.093859 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.093871 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:48Z","lastTransitionTime":"2026-01-26T17:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.196510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.196551 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.196562 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.196577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.196589 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:48Z","lastTransitionTime":"2026-01-26T17:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.299809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.299911 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.299928 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.299976 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.300002 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:48Z","lastTransitionTime":"2026-01-26T17:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.403201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.403270 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.403286 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.403310 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.403332 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:48Z","lastTransitionTime":"2026-01-26T17:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.506387 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.506438 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.506451 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.506468 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.506484 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:48Z","lastTransitionTime":"2026-01-26T17:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.570896 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:01:50.632800696 +0000 UTC Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.589286 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.589318 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.589470 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:48 crc kubenswrapper[4787]: E0126 17:44:48.589684 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:48 crc kubenswrapper[4787]: E0126 17:44:48.589869 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:48 crc kubenswrapper[4787]: E0126 17:44:48.590112 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.608937 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.609020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.609037 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.609059 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.609078 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:48Z","lastTransitionTime":"2026-01-26T17:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.714331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.714410 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.714451 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.714471 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.714485 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:48Z","lastTransitionTime":"2026-01-26T17:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.817248 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.817292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.817302 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.817318 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.817329 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:48Z","lastTransitionTime":"2026-01-26T17:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.919790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.919841 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.919856 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.919877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:48 crc kubenswrapper[4787]: I0126 17:44:48.919893 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:48Z","lastTransitionTime":"2026-01-26T17:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.022465 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.022540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.022570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.022600 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.022627 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:49Z","lastTransitionTime":"2026-01-26T17:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.125261 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.125328 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.125351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.125382 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.125406 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:49Z","lastTransitionTime":"2026-01-26T17:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.228387 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.228436 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.228445 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.228459 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.228470 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:49Z","lastTransitionTime":"2026-01-26T17:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.331569 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.331635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.331658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.331689 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.331711 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:49Z","lastTransitionTime":"2026-01-26T17:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.434907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.435016 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.435041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.435073 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.435095 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:49Z","lastTransitionTime":"2026-01-26T17:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.538344 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.538416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.538450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.538479 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.538501 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:49Z","lastTransitionTime":"2026-01-26T17:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.571732 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:39:28.898465693 +0000 UTC Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.588462 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:49 crc kubenswrapper[4787]: E0126 17:44:49.588698 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.642207 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.642266 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.642284 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.642309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.642327 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:49Z","lastTransitionTime":"2026-01-26T17:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.745200 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.745280 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.745294 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.745310 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.745321 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:49Z","lastTransitionTime":"2026-01-26T17:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.848064 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.848238 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.848268 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.848299 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.848321 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:49Z","lastTransitionTime":"2026-01-26T17:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.951886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.951991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.952019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.952049 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:49 crc kubenswrapper[4787]: I0126 17:44:49.952072 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:49Z","lastTransitionTime":"2026-01-26T17:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.055395 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.055452 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.055468 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.055496 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.055511 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:50Z","lastTransitionTime":"2026-01-26T17:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.158933 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.159063 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.159083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.159108 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.159127 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:50Z","lastTransitionTime":"2026-01-26T17:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.261190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.261240 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.261258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.261280 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.261336 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:50Z","lastTransitionTime":"2026-01-26T17:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.363610 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.363682 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.363701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.363724 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.363740 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:50Z","lastTransitionTime":"2026-01-26T17:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.467152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.467234 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.467251 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.467273 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.467290 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:50Z","lastTransitionTime":"2026-01-26T17:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.569827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.569862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.569872 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.569885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.569895 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:50Z","lastTransitionTime":"2026-01-26T17:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.572203 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:44:16.243190697 +0000 UTC Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.588585 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.588585 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:50 crc kubenswrapper[4787]: E0126 17:44:50.588709 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.588609 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:50 crc kubenswrapper[4787]: E0126 17:44:50.588854 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:50 crc kubenswrapper[4787]: E0126 17:44:50.588906 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.672202 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.672252 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.672264 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.672281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.672292 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:50Z","lastTransitionTime":"2026-01-26T17:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.775444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.775773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.775986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.776173 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.776321 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:50Z","lastTransitionTime":"2026-01-26T17:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.879016 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.879055 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.879067 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.879085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.879097 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:50Z","lastTransitionTime":"2026-01-26T17:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.981657 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.981718 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.981738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.981761 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:50 crc kubenswrapper[4787]: I0126 17:44:50.981779 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:50Z","lastTransitionTime":"2026-01-26T17:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.084254 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.084292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.084300 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.084319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.084330 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:51Z","lastTransitionTime":"2026-01-26T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.187191 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.187239 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.187251 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.187267 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.187278 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:51Z","lastTransitionTime":"2026-01-26T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.289524 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.289576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.289587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.289602 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.289611 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:51Z","lastTransitionTime":"2026-01-26T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.392758 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.393112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.393250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.393424 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.393568 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:51Z","lastTransitionTime":"2026-01-26T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.496467 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.496516 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.496527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.496546 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.496559 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:51Z","lastTransitionTime":"2026-01-26T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.572597 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:08:49.09296952 +0000 UTC Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.588877 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:51 crc kubenswrapper[4787]: E0126 17:44:51.589035 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.598741 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.598791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.598807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.598827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.598843 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:51Z","lastTransitionTime":"2026-01-26T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.620990 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.633521 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.645238 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee632
4b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.654483 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.663441 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95
308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.674152 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f50e69-15a4-4548-b06c-fd43808757ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50bb15d58e74b242e0d6c51fe7ab5f126525dab3e42e519a4d3f2c1ccac0ee28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d681c7d6a4d81e784dbc46d62ac85a1a2494d8518a0b58e37db2b4b5cb7b22ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5646bb0a80079e0292e226a56941575cb44ba454fbead79745afd303d721c9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.685072 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022
e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.700372 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.700433 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.700450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:51 crc 
kubenswrapper[4787]: I0126 17:44:51.700478 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.700495 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:51Z","lastTransitionTime":"2026-01-26T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.707151 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4eb6df562cbd9b93908fc78c5c125ff3d466ce2427fe9ebe0204b4d1f9ea55c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"message\\\":\\\"I0126 17:44:23.637368 6207 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637426 6207 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637455 6207 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637507 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637573 6207 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:23.637655 6207 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:23.637512 6207 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:44:23.637548 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:40Z\\\",\\\"message\\\":\\\"640357 6417 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:40.640481 6417 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.640794 6417 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.646919 6417 handler.go:190] 
Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:40.647020 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:40.647225 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 17:44:40.647297 6417 factory.go:656] Stopping watch factory\\\\nI0126 17:44:40.647328 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:40.651326 6417 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:44:40.651368 6417 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:44:40.651412 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:44:40.651437 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:44:40.651488 6417 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.721440 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192
.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.735419 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\
",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.748687 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45
b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.764356 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.780737 4787 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.797483 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.802778 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.802828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.802836 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.802851 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.802861 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:51Z","lastTransitionTime":"2026-01-26T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.809556 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.820313 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.834974 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.847771 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:51Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.905031 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.905386 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.905509 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.905626 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:51 crc kubenswrapper[4787]: I0126 17:44:51.905729 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:51Z","lastTransitionTime":"2026-01-26T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.008135 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.008185 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.008196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.008217 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.008229 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:52Z","lastTransitionTime":"2026-01-26T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.110901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.111019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.111045 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.111077 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.111101 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:52Z","lastTransitionTime":"2026-01-26T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.214090 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.214444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.214561 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.214667 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.214756 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:52Z","lastTransitionTime":"2026-01-26T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.317539 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.317570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.317580 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.317593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.317605 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:52Z","lastTransitionTime":"2026-01-26T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.420150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.420193 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.420203 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.420220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.420232 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:52Z","lastTransitionTime":"2026-01-26T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.523556 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.523601 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.523611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.523626 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.523637 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:52Z","lastTransitionTime":"2026-01-26T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.574482 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:05:24.268911299 +0000 UTC Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.588998 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:52 crc kubenswrapper[4787]: E0126 17:44:52.589144 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.589342 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:52 crc kubenswrapper[4787]: E0126 17:44:52.589445 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.589584 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:52 crc kubenswrapper[4787]: E0126 17:44:52.589760 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.626969 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.627028 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.627039 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.627056 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.627070 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:52Z","lastTransitionTime":"2026-01-26T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.729757 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.729802 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.729812 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.729829 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.729841 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:52Z","lastTransitionTime":"2026-01-26T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.833178 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.833230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.833241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.833259 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.833273 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:52Z","lastTransitionTime":"2026-01-26T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.936681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.936755 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.936769 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.936789 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:52 crc kubenswrapper[4787]: I0126 17:44:52.936801 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:52Z","lastTransitionTime":"2026-01-26T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.039324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.039415 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.039426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.039441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.039450 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:53Z","lastTransitionTime":"2026-01-26T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.142359 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.142402 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.142414 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.142430 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.142441 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:53Z","lastTransitionTime":"2026-01-26T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.244839 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.244894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.244906 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.244923 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.244936 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:53Z","lastTransitionTime":"2026-01-26T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.347419 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.347474 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.347487 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.347507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.347520 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:53Z","lastTransitionTime":"2026-01-26T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.450438 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.450510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.450523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.450545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.450559 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:53Z","lastTransitionTime":"2026-01-26T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.553829 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.553877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.553888 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.553905 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.553917 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:53Z","lastTransitionTime":"2026-01-26T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.575600 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:02:56.875129238 +0000 UTC Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.589399 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:53 crc kubenswrapper[4787]: E0126 17:44:53.589550 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.590707 4787 scope.go:117] "RemoveContainer" containerID="bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5" Jan 26 17:44:53 crc kubenswrapper[4787]: E0126 17:44:53.591006 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.621405 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b267
02f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f
01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.641045 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.657532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.657608 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.657631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.657659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.657619 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.657682 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:53Z","lastTransitionTime":"2026-01-26T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.673753 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.689327 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.709434 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f50e69-15a4-4548-b06c-fd43808757ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50bb15d58e74b242e0d6c51fe7ab5f126525dab3e42e519a4d3f2c1ccac0ee28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d681c7d6a4d81e7
84dbc46d62ac85a1a2494d8518a0b58e37db2b4b5cb7b22ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5646bb0a80079e0292e226a56941575cb44ba454fbead79745afd303d721c9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.730753 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.751737 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:40Z\\\",\\\"message\\\":\\\"640357 6417 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:40.640481 6417 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.640794 6417 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.646919 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:40.647020 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:40.647225 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 17:44:40.647297 6417 factory.go:656] Stopping watch factory\\\\nI0126 17:44:40.647328 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:40.651326 6417 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:44:40.651368 6417 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:44:40.651412 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:44:40.651437 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:44:40.651488 6417 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf
07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.761420 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.761479 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.761493 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.761515 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.761530 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:53Z","lastTransitionTime":"2026-01-26T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.768977 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a
0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.789716 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.805016 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.820872 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.842231 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.864567 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.865052 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.865079 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.865093 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.865112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.865128 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:53Z","lastTransitionTime":"2026-01-26T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.875993 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.887547 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.901645 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.918224 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.968658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.969212 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.969271 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.969303 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:53 crc kubenswrapper[4787]: I0126 17:44:53.969331 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:53Z","lastTransitionTime":"2026-01-26T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.036548 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.036583 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.036591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.036606 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.036615 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: E0126 17:44:54.055906 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:54Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.061629 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.061685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.061701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.061726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.061742 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: E0126 17:44:54.080249 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:54Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.085426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.085484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.085499 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.085519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.085535 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: E0126 17:44:54.101251 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:54Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.106126 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.106221 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.106245 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.106277 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.106298 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: E0126 17:44:54.122841 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:54Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.128751 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.128798 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.128808 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.128829 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.128844 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: E0126 17:44:54.145422 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:44:54Z is after 2025-08-24T17:21:41Z" Jan 26 17:44:54 crc kubenswrapper[4787]: E0126 17:44:54.145581 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.148304 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.148346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.148357 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.148374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.148388 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.252214 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.252289 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.252316 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.252349 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.252371 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.354832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.354914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.354939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.355015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.355041 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.458459 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.458536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.458557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.458584 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.458602 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.561123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.561197 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.561210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.561226 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.561261 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.576046 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:18:28.216222563 +0000 UTC Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.588539 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.588559 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.588746 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:54 crc kubenswrapper[4787]: E0126 17:44:54.589016 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:54 crc kubenswrapper[4787]: E0126 17:44:54.589115 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:54 crc kubenswrapper[4787]: E0126 17:44:54.589170 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.663539 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.663603 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.663619 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.663642 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.663660 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.766187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.766247 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.766260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.766278 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.766296 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.868903 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.869188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.869210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.869231 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.869243 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.973041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.973137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.973161 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.973217 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:54 crc kubenswrapper[4787]: I0126 17:44:54.973234 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:54Z","lastTransitionTime":"2026-01-26T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.076301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.076340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.076350 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.076364 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.076374 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:55Z","lastTransitionTime":"2026-01-26T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.180600 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.180676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.180696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.180725 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.180747 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:55Z","lastTransitionTime":"2026-01-26T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.283289 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.283423 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.283449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.283476 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.283495 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:55Z","lastTransitionTime":"2026-01-26T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.386792 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.387143 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.387233 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.387325 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.387407 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:55Z","lastTransitionTime":"2026-01-26T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.489862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.489913 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.489924 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.489939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.489979 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:55Z","lastTransitionTime":"2026-01-26T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.576851 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:53:02.261158084 +0000 UTC Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.589393 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:55 crc kubenswrapper[4787]: E0126 17:44:55.589636 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.592739 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.592775 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.592809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.592833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.592847 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:55Z","lastTransitionTime":"2026-01-26T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.695515 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.695833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.695943 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.696085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.696187 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:55Z","lastTransitionTime":"2026-01-26T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.799208 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.799456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.799648 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.800266 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.800430 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:55Z","lastTransitionTime":"2026-01-26T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.902842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.903140 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.903211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.903277 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:55 crc kubenswrapper[4787]: I0126 17:44:55.903345 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:55Z","lastTransitionTime":"2026-01-26T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.005222 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.005310 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.005352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.005370 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.005383 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:56Z","lastTransitionTime":"2026-01-26T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.108311 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.108375 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.108395 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.108421 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.108439 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:56Z","lastTransitionTime":"2026-01-26T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.210788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.210818 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.210826 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.210838 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.210848 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:56Z","lastTransitionTime":"2026-01-26T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.313317 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.313363 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.313374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.313389 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.313403 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:56Z","lastTransitionTime":"2026-01-26T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.416115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.416181 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.416230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.416253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.416269 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:56Z","lastTransitionTime":"2026-01-26T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.518146 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.518191 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.518202 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.518218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.518231 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:56Z","lastTransitionTime":"2026-01-26T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.577888 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:40:45.25967722 +0000 UTC Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.589231 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.589274 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:56 crc kubenswrapper[4787]: E0126 17:44:56.589357 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.589231 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:56 crc kubenswrapper[4787]: E0126 17:44:56.589459 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:56 crc kubenswrapper[4787]: E0126 17:44:56.589523 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.620818 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.620882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.620895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.620913 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.620925 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:56Z","lastTransitionTime":"2026-01-26T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.723288 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.723357 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.723373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.723395 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.723411 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:56Z","lastTransitionTime":"2026-01-26T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.827016 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.827076 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.827093 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.827114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.827129 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:56Z","lastTransitionTime":"2026-01-26T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.930038 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.930102 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.930121 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.930146 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:56 crc kubenswrapper[4787]: I0126 17:44:56.930164 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:56Z","lastTransitionTime":"2026-01-26T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.033096 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.033151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.033165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.033183 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.033197 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:57Z","lastTransitionTime":"2026-01-26T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.136397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.136471 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.136488 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.136514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.136529 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:57Z","lastTransitionTime":"2026-01-26T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.239157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.239199 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.239211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.239226 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.239236 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:57Z","lastTransitionTime":"2026-01-26T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.341435 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.341525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.341552 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.341584 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.341609 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:57Z","lastTransitionTime":"2026-01-26T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.443654 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.443698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.443716 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.443733 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.443746 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:57Z","lastTransitionTime":"2026-01-26T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.546417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.546461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.546471 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.546488 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.546500 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:57Z","lastTransitionTime":"2026-01-26T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.578968 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:21:44.005508699 +0000 UTC Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.588326 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:57 crc kubenswrapper[4787]: E0126 17:44:57.588523 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.649177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.649233 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.649243 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.649262 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.649277 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:57Z","lastTransitionTime":"2026-01-26T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.752143 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.752195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.752206 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.752223 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.752234 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:57Z","lastTransitionTime":"2026-01-26T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.844416 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:57 crc kubenswrapper[4787]: E0126 17:44:57.844635 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:57 crc kubenswrapper[4787]: E0126 17:44:57.844732 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs podName:f04b2906-5567-4455-a1e8-5d85d5ea882e nodeName:}" failed. No retries permitted until 2026-01-26 17:45:29.844708341 +0000 UTC m=+98.551844564 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs") pod "network-metrics-daemon-vkdfd" (UID: "f04b2906-5567-4455-a1e8-5d85d5ea882e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.854622 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.854658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.854670 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.854691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.854704 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:57Z","lastTransitionTime":"2026-01-26T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.956987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.957015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.957023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.957036 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:57 crc kubenswrapper[4787]: I0126 17:44:57.957046 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:57Z","lastTransitionTime":"2026-01-26T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.059713 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.059759 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.059884 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.060148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.060177 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:58Z","lastTransitionTime":"2026-01-26T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.162508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.162594 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.162608 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.162625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.162639 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:58Z","lastTransitionTime":"2026-01-26T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.265518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.265579 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.265592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.265608 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.265619 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:58Z","lastTransitionTime":"2026-01-26T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.367995 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.368033 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.368041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.368055 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.368064 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:58Z","lastTransitionTime":"2026-01-26T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.470305 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.470350 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.470360 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.470375 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.470384 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:58Z","lastTransitionTime":"2026-01-26T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.572585 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.572673 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.572704 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.572736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.572759 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:58Z","lastTransitionTime":"2026-01-26T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.579777 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:17:25.06443257 +0000 UTC Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.589154 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.589244 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.589257 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:44:58 crc kubenswrapper[4787]: E0126 17:44:58.589348 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:44:58 crc kubenswrapper[4787]: E0126 17:44:58.589518 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:44:58 crc kubenswrapper[4787]: E0126 17:44:58.589646 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.675703 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.675759 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.675777 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.675800 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.675817 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:58Z","lastTransitionTime":"2026-01-26T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.779347 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.779386 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.779399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.779416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.779428 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:58Z","lastTransitionTime":"2026-01-26T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.882866 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.882921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.882936 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.882980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.882994 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:58Z","lastTransitionTime":"2026-01-26T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.986027 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.986137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.986156 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.986183 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:58 crc kubenswrapper[4787]: I0126 17:44:58.986218 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:58Z","lastTransitionTime":"2026-01-26T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.088209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.088255 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.088265 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.088280 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.088291 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:59Z","lastTransitionTime":"2026-01-26T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.190868 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.190926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.190938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.190976 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.190991 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:59Z","lastTransitionTime":"2026-01-26T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.293872 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.293915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.293925 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.293962 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.293980 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:59Z","lastTransitionTime":"2026-01-26T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.396517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.396557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.396570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.396589 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.396602 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:59Z","lastTransitionTime":"2026-01-26T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.499700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.499745 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.499756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.499771 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.499782 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:59Z","lastTransitionTime":"2026-01-26T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.579912 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:19:34.712121051 +0000 UTC Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.589337 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:44:59 crc kubenswrapper[4787]: E0126 17:44:59.589514 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.601661 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.601701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.601709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.601739 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.601751 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:59Z","lastTransitionTime":"2026-01-26T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.704331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.704379 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.704390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.704409 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.704424 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:59Z","lastTransitionTime":"2026-01-26T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.806688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.806735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.806745 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.806760 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.806770 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:59Z","lastTransitionTime":"2026-01-26T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.909395 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.909440 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.909453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.909471 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:44:59 crc kubenswrapper[4787]: I0126 17:44:59.909483 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:44:59Z","lastTransitionTime":"2026-01-26T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.012554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.012646 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.012666 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.012699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.012721 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:00Z","lastTransitionTime":"2026-01-26T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.117799 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.117863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.117873 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.117889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.117899 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:00Z","lastTransitionTime":"2026-01-26T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.220620 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.220657 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.220666 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.220679 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.220688 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:00Z","lastTransitionTime":"2026-01-26T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.322853 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.322904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.322921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.322941 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.322975 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:00Z","lastTransitionTime":"2026-01-26T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.429598 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.429659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.429676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.429696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.429713 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:00Z","lastTransitionTime":"2026-01-26T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.532196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.532261 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.532274 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.532294 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.532311 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:00Z","lastTransitionTime":"2026-01-26T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.580439 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:44:17.134474374 +0000 UTC Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.588755 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.588827 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:00 crc kubenswrapper[4787]: E0126 17:45:00.588878 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.588775 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:00 crc kubenswrapper[4787]: E0126 17:45:00.589067 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:00 crc kubenswrapper[4787]: E0126 17:45:00.589188 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.634894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.634935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.634984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.635005 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.635021 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:00Z","lastTransitionTime":"2026-01-26T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.737329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.737370 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.737381 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.737396 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.737408 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:00Z","lastTransitionTime":"2026-01-26T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.839874 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.839908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.839917 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.839930 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.839940 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:00Z","lastTransitionTime":"2026-01-26T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.943467 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.943525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.943537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.943557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:00 crc kubenswrapper[4787]: I0126 17:45:00.943570 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:00Z","lastTransitionTime":"2026-01-26T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.024671 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-65mpd_d2e50ad1-82f9-48f0-a103-6d584a3fa02e/kube-multus/0.log" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.024729 4787 generic.go:334] "Generic (PLEG): container finished" podID="d2e50ad1-82f9-48f0-a103-6d584a3fa02e" containerID="6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648" exitCode=1 Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.024760 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65mpd" event={"ID":"d2e50ad1-82f9-48f0-a103-6d584a3fa02e","Type":"ContainerDied","Data":"6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648"} Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.025175 4787 scope.go:117] "RemoveContainer" containerID="6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.044620 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.046004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.046029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.046038 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.046054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.046063 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:01Z","lastTransitionTime":"2026-01-26T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.057935 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.075896 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.086404 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.095477 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.105990 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.116107 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f50e69-15a4-4548-b06c-fd43808757ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50bb15d58e74b242e0d6c51fe7ab5f126525dab3e42e519a4d3f2c1ccac0ee28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d681c7d6a4
d81e784dbc46d62ac85a1a2494d8518a0b58e37db2b4b5cb7b22ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5646bb0a80079e0292e226a56941575cb44ba454fbead79745afd303d721c9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.134634 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.145604 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.148989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.149023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.149036 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.149053 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.149066 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:01Z","lastTransitionTime":"2026-01-26T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.158518 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.169221 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.183160 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.197681 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.211427 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609
c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.228469 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:40Z\\\",\\\"message\\\":\\\"640357 6417 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:40.640481 6417 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.640794 6417 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.646919 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:40.647020 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:40.647225 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 17:44:40.647297 6417 factory.go:656] Stopping watch factory\\\\nI0126 17:44:40.647328 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:40.651326 6417 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:44:40.651368 6417 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:44:40.651412 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:44:40.651437 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:44:40.651488 6417 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf
07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.243155 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.250738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.250781 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.250790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.250806 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.250815 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:01Z","lastTransitionTime":"2026-01-26T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.259930 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.276471 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:00Z\\\",\\\"message\\\":\\\"2026-01-26T17:44:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8\\\\n2026-01-26T17:44:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8 to /host/opt/cni/bin/\\\\n2026-01-26T17:44:14Z [verbose] multus-daemon started\\\\n2026-01-26T17:44:14Z [verbose] Readiness Indicator file check\\\\n2026-01-26T17:44:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.353588 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.353632 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.353644 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.353663 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.353676 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:01Z","lastTransitionTime":"2026-01-26T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.456871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.456900 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.456911 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.456927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.456940 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:01Z","lastTransitionTime":"2026-01-26T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.559331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.559377 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.559388 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.559402 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.559412 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:01Z","lastTransitionTime":"2026-01-26T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.580597 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:37:06.959950471 +0000 UTC Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.589059 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:01 crc kubenswrapper[4787]: E0126 17:45:01.589172 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.601831 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\
"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.615886 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.630435 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:00Z\\\",\\\"message\\\":\\\"2026-01-26T17:44:14+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8\\\\n2026-01-26T17:44:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8 to /host/opt/cni/bin/\\\\n2026-01-26T17:44:14Z [verbose] multus-daemon started\\\\n2026-01-26T17:44:14Z [verbose] Readiness Indicator file check\\\\n2026-01-26T17:44:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.654396 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.662453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.662492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.662501 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.662516 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.662526 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:01Z","lastTransitionTime":"2026-01-26T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.677466 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.696862 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.708919 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.719321 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.730657 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.746354 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f50e69-15a4-4548-b06c-fd43808757ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50bb15d58e74b242e0d6c51fe7ab5f126525dab3e42e519a4d3f2c1ccac0ee28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d681c7d6a4
d81e784dbc46d62ac85a1a2494d8518a0b58e37db2b4b5cb7b22ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5646bb0a80079e0292e226a56941575cb44ba454fbead79745afd303d721c9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.766076 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.766928 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.766971 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.766986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.767002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.767012 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:01Z","lastTransitionTime":"2026-01-26T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.777154 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.811364 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.831295 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.843277 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.855564 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.864342 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609
c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.868981 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.869014 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.869025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.869043 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.869056 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:01Z","lastTransitionTime":"2026-01-26T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.883263 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:40Z\\\",\\\"message\\\":\\\"640357 6417 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:40.640481 6417 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.640794 6417 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.646919 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:40.647020 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:40.647225 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 17:44:40.647297 6417 factory.go:656] Stopping watch factory\\\\nI0126 17:44:40.647328 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:40.651326 6417 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:44:40.651368 6417 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:44:40.651412 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:44:40.651437 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:44:40.651488 6417 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf
07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:01Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.971714 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.971757 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.971773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.971795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:01 crc kubenswrapper[4787]: I0126 17:45:01.971811 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:01Z","lastTransitionTime":"2026-01-26T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.031903 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-65mpd_d2e50ad1-82f9-48f0-a103-6d584a3fa02e/kube-multus/0.log" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.031987 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65mpd" event={"ID":"d2e50ad1-82f9-48f0-a103-6d584a3fa02e","Type":"ContainerStarted","Data":"e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff"} Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.050711 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.062440 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.073547 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.074406 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.074433 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.074441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.074460 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.074471 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:02Z","lastTransitionTime":"2026-01-26T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.083804 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.094077 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.105827 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f50e69-15a4-4548-b06c-fd43808757ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50bb15d58e74b242e0d6c51fe7ab5f126525dab3e42e519a4d3f2c1ccac0ee28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d681c7d6a4d81e784dbc46d62ac85a1a2494d8518a0b58e37db2b4b5cb7b22ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5646bb0a80079e0292e226a56941575cb44ba454fbead79745afd303d721c9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.119376 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.130069 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.150364 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:40Z\\\",\\\"message\\\":\\\"640357 6417 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:40.640481 6417 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 
17:44:40.640794 6417 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.646919 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:40.647020 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:40.647225 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 17:44:40.647297 6417 factory.go:656] Stopping watch factory\\\\nI0126 17:44:40.647328 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:40.651326 6417 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:44:40.651368 6417 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:44:40.651412 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:44:40.651437 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:44:40.651488 6417 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf
07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.165827 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.176966 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.176991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.177000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.177012 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.177021 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:02Z","lastTransitionTime":"2026-01-26T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.178664 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.190990 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:00Z\\\",\\\"message\\\":\\\"2026-01-26T17:44:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8\\\\n2026-01-26T17:44:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8 to /host/opt/cni/bin/\\\\n2026-01-26T17:44:14Z [verbose] multus-daemon started\\\\n2026-01-26T17:44:14Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T17:44:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.203057 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.218424 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.234264 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.244739 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.258910 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.274469 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:02Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.279638 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.279680 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.279692 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.279712 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.279724 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:02Z","lastTransitionTime":"2026-01-26T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.383456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.383500 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.383510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.383525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.383536 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:02Z","lastTransitionTime":"2026-01-26T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.486110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.486151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.486162 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.486180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.486194 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:02Z","lastTransitionTime":"2026-01-26T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.581415 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:38:32.936818264 +0000 UTC Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.588380 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.588403 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.588425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.588435 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.588447 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.588451 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.588484 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:02Z","lastTransitionTime":"2026-01-26T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.588388 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:02 crc kubenswrapper[4787]: E0126 17:45:02.588515 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:02 crc kubenswrapper[4787]: E0126 17:45:02.588620 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:02 crc kubenswrapper[4787]: E0126 17:45:02.588682 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.691133 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.691188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.691199 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.691217 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.691230 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:02Z","lastTransitionTime":"2026-01-26T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.793804 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.794077 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.794155 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.794229 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.794330 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:02Z","lastTransitionTime":"2026-01-26T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.897013 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.897070 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.897080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.897095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.897105 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:02Z","lastTransitionTime":"2026-01-26T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.999187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.999456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.999571 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.999659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:02 crc kubenswrapper[4787]: I0126 17:45:02.999742 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:02Z","lastTransitionTime":"2026-01-26T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.102753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.102806 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.102816 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.102833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.102845 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:03Z","lastTransitionTime":"2026-01-26T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.204686 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.204717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.204726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.204738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.204747 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:03Z","lastTransitionTime":"2026-01-26T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.307110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.307156 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.307172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.307192 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.307208 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:03Z","lastTransitionTime":"2026-01-26T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.409562 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.409607 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.409616 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.409631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.409641 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:03Z","lastTransitionTime":"2026-01-26T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.513220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.513255 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.513265 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.513280 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.513293 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:03Z","lastTransitionTime":"2026-01-26T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.581783 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:37:14.32469147 +0000 UTC Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.589222 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:03 crc kubenswrapper[4787]: E0126 17:45:03.589382 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.615787 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.615848 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.615857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.615875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.615887 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:03Z","lastTransitionTime":"2026-01-26T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.720473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.720521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.720534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.720585 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.720612 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:03Z","lastTransitionTime":"2026-01-26T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.823271 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.823323 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.823334 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.823349 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.823363 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:03Z","lastTransitionTime":"2026-01-26T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.926422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.926474 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.926487 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.926508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:03 crc kubenswrapper[4787]: I0126 17:45:03.926525 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:03Z","lastTransitionTime":"2026-01-26T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.028816 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.028851 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.028861 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.028877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.028888 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.131568 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.131617 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.131629 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.131643 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.131653 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.234071 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.234106 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.234115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.234129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.234139 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.336519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.336563 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.336575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.336591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.336604 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.439213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.439259 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.439268 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.439286 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.439297 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.441485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.441532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.441543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.441559 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.441571 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: E0126 17:45:04.462255 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:04Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.467255 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.467319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.467331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.467348 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.467378 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: E0126 17:45:04.482146 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:04Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.486475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.486534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.486544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.486559 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.486570 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: E0126 17:45:04.544983 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:04Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:04 crc kubenswrapper[4787]: E0126 17:45:04.545086 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.546924 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.546986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.546995 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.547007 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.547016 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.582499 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:14:06.371275507 +0000 UTC Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.588806 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.588831 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.588811 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:04 crc kubenswrapper[4787]: E0126 17:45:04.588928 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:04 crc kubenswrapper[4787]: E0126 17:45:04.589136 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:04 crc kubenswrapper[4787]: E0126 17:45:04.589244 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.649662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.649695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.649706 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.649723 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.649733 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.752060 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.752102 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.752110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.752127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.752137 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.855179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.855212 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.855221 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.855236 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.855247 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.958307 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.958376 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.958387 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.958405 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:04 crc kubenswrapper[4787]: I0126 17:45:04.958419 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:04Z","lastTransitionTime":"2026-01-26T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.060787 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.060824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.060834 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.060848 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.060859 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:05Z","lastTransitionTime":"2026-01-26T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.163313 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.163356 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.163366 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.163380 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.163390 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:05Z","lastTransitionTime":"2026-01-26T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.265027 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.265241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.265346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.265443 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.265524 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:05Z","lastTransitionTime":"2026-01-26T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.368506 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.368800 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.368888 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.369004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.369102 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:05Z","lastTransitionTime":"2026-01-26T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.471860 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.472244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.472423 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.472612 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.472801 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:05Z","lastTransitionTime":"2026-01-26T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.577078 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.577142 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.577156 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.577178 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.577193 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:05Z","lastTransitionTime":"2026-01-26T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.583446 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:39:03.208413406 +0000 UTC Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.588885 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:05 crc kubenswrapper[4787]: E0126 17:45:05.589548 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.590062 4787 scope.go:117] "RemoveContainer" containerID="bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.680275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.680321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.680332 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.680348 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.680361 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:05Z","lastTransitionTime":"2026-01-26T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.782259 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.782280 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.782288 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.782301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.782310 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:05Z","lastTransitionTime":"2026-01-26T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.885095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.885152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.885164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.885188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.885201 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:05Z","lastTransitionTime":"2026-01-26T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.990666 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.990732 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.990746 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.990765 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:05 crc kubenswrapper[4787]: I0126 17:45:05.990784 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:05Z","lastTransitionTime":"2026-01-26T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.045559 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/2.log" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.047553 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe"} Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.049605 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.063644 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.085914 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.093616 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.093658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.093668 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.093684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.093699 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:06Z","lastTransitionTime":"2026-01-26T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.105084 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.125284 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9b
d4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.142872 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc 
kubenswrapper[4787]: I0126 17:45:06.158310 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.176168 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f50e69-15a4-4548-b06c-fd43808757ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50bb15d58e74b242e0d6c51fe7ab5f126525dab3e42e519a4d3f2c1ccac0ee28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d681c7d6a4d81e784dbc46d62ac85a1a2494d8518a0b58e37db2b4b5cb7b22ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5646bb0a80079e0292e226a56941575cb44ba454fbead79745afd303d721c9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.196182 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.196218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.196227 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.196240 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.196251 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:06Z","lastTransitionTime":"2026-01-26T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.203217 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.214984 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.225828 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee632
4b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.235767 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.246992 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.258389 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.268577 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609
c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.284703 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:40Z\\\",\\\"message\\\":\\\"640357 6417 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:40.640481 6417 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.640794 6417 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.646919 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:40.647020 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:40.647225 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 17:44:40.647297 6417 factory.go:656] Stopping watch factory\\\\nI0126 17:44:40.647328 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:40.651326 6417 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:44:40.651368 6417 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:44:40.651412 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:44:40.651437 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:44:40.651488 6417 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.299009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.299072 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.299086 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.299102 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.299112 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:06Z","lastTransitionTime":"2026-01-26T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.300717 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.314113 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.327444 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:00Z\\\",\\\"message\\\":\\\"2026-01-26T17:44:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8\\\\n2026-01-26T17:44:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8 to /host/opt/cni/bin/\\\\n2026-01-26T17:44:14Z [verbose] multus-daemon started\\\\n2026-01-26T17:44:14Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T17:44:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.401631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.401700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.401721 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.401749 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.401772 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:06Z","lastTransitionTime":"2026-01-26T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.503411 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.503444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.503453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.503465 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.503473 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:06Z","lastTransitionTime":"2026-01-26T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.584385 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 09:03:03.87975655 +0000 UTC Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.592792 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.592806 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:06 crc kubenswrapper[4787]: E0126 17:45:06.592998 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.592828 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:06 crc kubenswrapper[4787]: E0126 17:45:06.593151 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:06 crc kubenswrapper[4787]: E0126 17:45:06.593257 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.606484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.606518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.606532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.606559 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.606574 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:06Z","lastTransitionTime":"2026-01-26T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.709393 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.709453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.709470 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.709493 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.709510 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:06Z","lastTransitionTime":"2026-01-26T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.811761 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.811797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.811805 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.811818 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.811830 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:06Z","lastTransitionTime":"2026-01-26T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.913967 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.914041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.914051 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.914067 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:06 crc kubenswrapper[4787]: I0126 17:45:06.914078 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:06Z","lastTransitionTime":"2026-01-26T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.016625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.016653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.016661 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.016673 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.016682 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:07Z","lastTransitionTime":"2026-01-26T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.118781 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.118852 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.118868 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.118891 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.118923 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:07Z","lastTransitionTime":"2026-01-26T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.222572 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.222641 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.222658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.222684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.222704 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:07Z","lastTransitionTime":"2026-01-26T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.325155 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.325224 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.325247 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.325276 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.325299 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:07Z","lastTransitionTime":"2026-01-26T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.428685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.428735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.428744 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.428760 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.428770 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:07Z","lastTransitionTime":"2026-01-26T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.532886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.532985 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.533010 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.533038 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.533061 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:07Z","lastTransitionTime":"2026-01-26T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.587490 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:18:22.665721583 +0000 UTC Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.588913 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:07 crc kubenswrapper[4787]: E0126 17:45:07.589184 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.636415 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.636456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.636467 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.636482 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.636494 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:07Z","lastTransitionTime":"2026-01-26T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.739212 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.739247 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.739256 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.739270 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.739280 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:07Z","lastTransitionTime":"2026-01-26T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.842585 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.842645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.842661 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.842683 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.842701 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:07Z","lastTransitionTime":"2026-01-26T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.944740 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.944788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.944797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.944814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:07 crc kubenswrapper[4787]: I0126 17:45:07.944824 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:07Z","lastTransitionTime":"2026-01-26T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.046795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.046834 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.046842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.046857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.046865 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:08Z","lastTransitionTime":"2026-01-26T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.055722 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/3.log" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.056437 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/2.log" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.058928 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe" exitCode=1 Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.058981 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe"} Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.059031 4787 scope.go:117] "RemoveContainer" containerID="bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.060392 4787 scope.go:117] "RemoveContainer" containerID="54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe" Jan 26 17:45:08 crc kubenswrapper[4787]: E0126 17:45:08.060675 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.079437 4787 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.092995 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.108411 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.118474 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.171831 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.172517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.172540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.172548 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.172560 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.172568 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:08Z","lastTransitionTime":"2026-01-26T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.184839 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.195293 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95
308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.205136 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f50e69-15a4-4548-b06c-fd43808757ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50bb15d58e74b242e0d6c51fe7ab5f126525dab3e42e519a4d3f2c1ccac0ee28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d681c7d6a4d81e784dbc46d62ac85a1a2494d8518a0b58e37db2b4b5cb7b22ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5646bb0a80079e0292e226a56941575cb44ba454fbead79745afd303d721c9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.227025 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.237158 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.249377 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee632
4b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.261397 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.274181 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.274236 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.274245 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.274260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.274291 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:08Z","lastTransitionTime":"2026-01-26T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.275114 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.285030 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.305339 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:40Z\\\",\\\"message\\\":\\\"640357 6417 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:40.640481 6417 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.640794 6417 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.646919 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:40.647020 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:40.647225 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 17:44:40.647297 6417 factory.go:656] Stopping watch factory\\\\nI0126 17:44:40.647328 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:40.651326 6417 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:44:40.651368 6417 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:44:40.651412 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:44:40.651437 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:44:40.651488 6417 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:07Z\\\",\\\"message\\\":\\\"tor.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:45:06.523635 6793 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:45:06.523641 6793 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:45:06.523660 6793 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:45:06.523075 6793 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:45:06.524475 6793 factory.go:656] Stopping watch factory\\\\nI0126 17:45:06.529817 6793 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:45:06.529840 6793 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:45:06.529890 6793 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:45:06.529919 6793 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:45:06.530019 6793 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\
\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 
17:45:08.322141 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.333926 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.346113 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:00Z\\\",\\\"message\\\":\\\"2026-01-26T17:44:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8\\\\n2026-01-26T17:44:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8 to /host/opt/cni/bin/\\\\n2026-01-26T17:44:14Z [verbose] multus-daemon started\\\\n2026-01-26T17:44:14Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T17:44:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:08Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.377734 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.377787 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.377799 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.377815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.377825 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:08Z","lastTransitionTime":"2026-01-26T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.480645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.480697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.480708 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.480722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.480731 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:08Z","lastTransitionTime":"2026-01-26T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.583482 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.583547 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.583565 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.583589 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.583606 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:08Z","lastTransitionTime":"2026-01-26T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.588847 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.588857 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:08 crc kubenswrapper[4787]: E0126 17:45:08.589079 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.588883 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:07:48.86403935 +0000 UTC Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.588885 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:08 crc kubenswrapper[4787]: E0126 17:45:08.589156 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:08 crc kubenswrapper[4787]: E0126 17:45:08.589568 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.686579 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.686651 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.686673 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.686703 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.686728 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:08Z","lastTransitionTime":"2026-01-26T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.788718 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.788762 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.788793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.788810 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.788824 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:08Z","lastTransitionTime":"2026-01-26T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.892794 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.892854 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.892878 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.892921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.893015 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:08Z","lastTransitionTime":"2026-01-26T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.995160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.995235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.995250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.995268 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:08 crc kubenswrapper[4787]: I0126 17:45:08.995280 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:08Z","lastTransitionTime":"2026-01-26T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.063314 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/3.log" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.098711 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.098778 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.098793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.098811 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.098825 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:09Z","lastTransitionTime":"2026-01-26T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.201903 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.202003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.202016 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.202070 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.202083 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:09Z","lastTransitionTime":"2026-01-26T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.304706 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.304773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.304790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.304815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.304834 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:09Z","lastTransitionTime":"2026-01-26T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.408177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.408284 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.408297 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.408320 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.408341 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:09Z","lastTransitionTime":"2026-01-26T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.512419 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.512469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.512477 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.512496 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.512507 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:09Z","lastTransitionTime":"2026-01-26T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.588696 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:09 crc kubenswrapper[4787]: E0126 17:45:09.589137 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.589202 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:22:14.48670623 +0000 UTC Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.615596 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.615908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.616035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.616135 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.616219 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:09Z","lastTransitionTime":"2026-01-26T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.718754 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.719086 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.719170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.719253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.719366 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:09Z","lastTransitionTime":"2026-01-26T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.822490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.822520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.822528 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.822540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.822549 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:09Z","lastTransitionTime":"2026-01-26T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.925510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.925583 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.925607 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.925636 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:09 crc kubenswrapper[4787]: I0126 17:45:09.925658 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:09Z","lastTransitionTime":"2026-01-26T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.028525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.028599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.028621 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.028653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.028675 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:10Z","lastTransitionTime":"2026-01-26T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.131444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.131568 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.131578 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.131590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.131601 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:10Z","lastTransitionTime":"2026-01-26T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.234484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.234528 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.234537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.234550 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.234561 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:10Z","lastTransitionTime":"2026-01-26T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.337260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.337321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.337337 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.337359 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.337378 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:10Z","lastTransitionTime":"2026-01-26T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.440447 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.440502 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.440514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.440533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.440546 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:10Z","lastTransitionTime":"2026-01-26T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.542890 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.542979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.542997 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.543024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.543041 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:10Z","lastTransitionTime":"2026-01-26T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.588440 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.588544 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.588444 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:10 crc kubenswrapper[4787]: E0126 17:45:10.588573 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:10 crc kubenswrapper[4787]: E0126 17:45:10.588758 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:10 crc kubenswrapper[4787]: E0126 17:45:10.588922 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.589579 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:27:03.193339903 +0000 UTC Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.647796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.647859 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.647871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.647888 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.647901 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:10Z","lastTransitionTime":"2026-01-26T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.750743 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.750779 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.750793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.750807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.750817 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:10Z","lastTransitionTime":"2026-01-26T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.854160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.854235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.854258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.854292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.854313 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:10Z","lastTransitionTime":"2026-01-26T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.957746 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.957793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.957804 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.957822 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:10 crc kubenswrapper[4787]: I0126 17:45:10.957835 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:10Z","lastTransitionTime":"2026-01-26T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.060509 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.060552 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.060562 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.060581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.060593 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:11Z","lastTransitionTime":"2026-01-26T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.162734 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.162798 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.162819 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.162847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.162869 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:11Z","lastTransitionTime":"2026-01-26T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.264557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.264603 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.264614 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.264629 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.264640 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:11Z","lastTransitionTime":"2026-01-26T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.367046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.367083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.367094 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.367109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.367118 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:11Z","lastTransitionTime":"2026-01-26T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.473267 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.473485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.473503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.473551 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.473568 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:11Z","lastTransitionTime":"2026-01-26T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.576747 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.576810 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.576821 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.576836 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.576846 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:11Z","lastTransitionTime":"2026-01-26T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.588566 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:11 crc kubenswrapper[4787]: E0126 17:45:11.588751 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.590599 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:45:48.715757706 +0000 UTC Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.605057 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 
2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.623727 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.640080 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f50e69-15a4-4548-b06c-fd43808757ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50bb15d58e74b242e0d6c51fe7ab5f126525dab3e42e519a4d3f2c1ccac0ee28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d681c7d6a4d81e784dbc46d62ac85a1a2494d8518a0b58e37db2b4b5cb7b22ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5646bb0a80079e0292e226a56941575cb44ba454fbead79745afd303d721c9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\
\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.674581 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\
\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},
{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.680223 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.680281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:11 crc 
kubenswrapper[4787]: I0126 17:45:11.680288 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.680300 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.680309 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:11Z","lastTransitionTime":"2026-01-26T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.692197 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.712598 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee632
4b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.731981 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.750076 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.763464 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609
c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.783367 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.783424 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.783438 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.783461 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.783476 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:11Z","lastTransitionTime":"2026-01-26T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.790038 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb53649ee51f8b4bf8e9990989b7c2c3e5d5a3d0c94d0d29fed3e97c867bb6d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:44:40Z\\\",\\\"message\\\":\\\"640357 6417 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:44:40.640481 6417 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.640794 6417 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 17:44:40.646919 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 17:44:40.647020 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:44:40.647225 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 17:44:40.647297 6417 factory.go:656] Stopping watch factory\\\\nI0126 17:44:40.647328 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 17:44:40.651326 6417 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:44:40.651368 6417 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:44:40.651412 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:44:40.651437 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:44:40.651488 6417 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:07Z\\\",\\\"message\\\":\\\"tor.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:45:06.523635 6793 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:45:06.523641 6793 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:45:06.523660 6793 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:45:06.523075 6793 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:45:06.524475 6793 factory.go:656] Stopping watch factory\\\\nI0126 17:45:06.529817 6793 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:45:06.529840 6793 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:45:06.529890 6793 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:45:06.529919 6793 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:45:06.530019 6793 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\
\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 
17:45:11.806127 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.825918 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.843621 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:00Z\\\",\\\"message\\\":\\\"2026-01-26T17:44:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8\\\\n2026-01-26T17:44:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8 to /host/opt/cni/bin/\\\\n2026-01-26T17:44:14Z [verbose] multus-daemon started\\\\n2026-01-26T17:44:14Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T17:44:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.857892 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.872087 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.886611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.886643 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.886651 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.886664 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.886673 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:11Z","lastTransitionTime":"2026-01-26T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.893782 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.908061 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9b
d4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.922352 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:11Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:11 crc 
kubenswrapper[4787]: I0126 17:45:11.988742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.988788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.988800 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.988837 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:11 crc kubenswrapper[4787]: I0126 17:45:11.988847 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:11Z","lastTransitionTime":"2026-01-26T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.092278 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.092313 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.092331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.092347 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.092363 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:12Z","lastTransitionTime":"2026-01-26T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.195400 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.195468 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.195486 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.195512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.195531 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:12Z","lastTransitionTime":"2026-01-26T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.298565 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.299005 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.299019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.299038 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.299053 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:12Z","lastTransitionTime":"2026-01-26T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.402312 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.402390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.402405 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.402458 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.402472 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:12Z","lastTransitionTime":"2026-01-26T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.505057 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.505097 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.505106 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.505120 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.505130 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:12Z","lastTransitionTime":"2026-01-26T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.589162 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.589238 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.589209 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:12 crc kubenswrapper[4787]: E0126 17:45:12.589363 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:12 crc kubenswrapper[4787]: E0126 17:45:12.589485 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:12 crc kubenswrapper[4787]: E0126 17:45:12.589574 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.591156 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 02:44:25.247125574 +0000 UTC Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.607645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.607691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.607699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.607712 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.607721 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:12Z","lastTransitionTime":"2026-01-26T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.710504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.710550 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.710561 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.710582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.710594 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:12Z","lastTransitionTime":"2026-01-26T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.813678 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.813728 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.813739 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.813755 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.813767 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:12Z","lastTransitionTime":"2026-01-26T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.916541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.916577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.916588 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.916602 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:12 crc kubenswrapper[4787]: I0126 17:45:12.916615 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:12Z","lastTransitionTime":"2026-01-26T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.019434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.019470 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.019484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.019508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.019533 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:13Z","lastTransitionTime":"2026-01-26T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.122072 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.122141 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.122164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.122195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.122221 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:13Z","lastTransitionTime":"2026-01-26T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.225457 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.225500 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.225516 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.225538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.225556 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:13Z","lastTransitionTime":"2026-01-26T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.327909 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.327989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.328013 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.328033 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.328046 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:13Z","lastTransitionTime":"2026-01-26T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.431032 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.431120 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.431144 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.431219 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.431240 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:13Z","lastTransitionTime":"2026-01-26T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.533281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.533324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.533333 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.533346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.533358 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:13Z","lastTransitionTime":"2026-01-26T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.589490 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:13 crc kubenswrapper[4787]: E0126 17:45:13.589746 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.591397 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:17:50.70511057 +0000 UTC Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.636205 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.636262 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.636278 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.636300 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.636317 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:13Z","lastTransitionTime":"2026-01-26T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.738656 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.738691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.738701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.738713 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.738722 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:13Z","lastTransitionTime":"2026-01-26T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.840894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.840931 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.840941 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.840974 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.840988 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:13Z","lastTransitionTime":"2026-01-26T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.944867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.944914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.944928 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.944968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:13 crc kubenswrapper[4787]: I0126 17:45:13.944982 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:13Z","lastTransitionTime":"2026-01-26T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.048020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.048145 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.048206 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.048239 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.048303 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.151331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.151377 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.151388 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.151416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.151430 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.254545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.254582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.254592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.254606 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.254616 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.356650 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.356721 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.356743 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.356772 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.356792 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.419231 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.419451 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.419618 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:18.419582074 +0000 UTC m=+147.126718247 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.419745 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.419839 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:46:18.41981832 +0000 UTC m=+147.126954493 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.459465 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.459515 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.459532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.459554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.459571 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.520227 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.520321 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.520391 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.520447 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.520456 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.520503 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.520532 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.520510 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:46:18.520491928 +0000 UTC m=+147.227628071 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.520615 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.520673 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.520693 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.520698 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 17:46:18.520648722 +0000 UTC m=+147.227784865 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.520771 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 17:46:18.520752264 +0000 UTC m=+147.227888437 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.562901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.563002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.563019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.563565 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.563643 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.588488 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.588518 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.588495 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.588635 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.588759 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.588832 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.591843 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:28:22.908552155 +0000 UTC Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.639561 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.639654 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.639678 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.639705 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.639724 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.661104 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.665413 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.665439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.665447 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.665459 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.665467 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.679711 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.683554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.683587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.683600 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.683613 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.683623 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.701280 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.705494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.705531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.705543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.705559 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.705570 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.720239 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.724393 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.724440 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.724451 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.724468 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.724480 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.741760 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:14 crc kubenswrapper[4787]: E0126 17:45:14.741939 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.743444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.743484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.743493 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.743508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.743518 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.846826 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.846886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.846905 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.846926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.846970 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.950019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.950061 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.950070 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.950084 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:14 crc kubenswrapper[4787]: I0126 17:45:14.950094 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:14Z","lastTransitionTime":"2026-01-26T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.052788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.052845 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.052879 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.052915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.052938 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:15Z","lastTransitionTime":"2026-01-26T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.156071 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.156148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.156186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.156218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.156240 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:15Z","lastTransitionTime":"2026-01-26T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.258880 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.258921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.258931 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.258981 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.258995 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:15Z","lastTransitionTime":"2026-01-26T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.362019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.362104 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.362128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.362157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.362180 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:15Z","lastTransitionTime":"2026-01-26T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.465275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.465355 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.465377 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.465406 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.465427 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:15Z","lastTransitionTime":"2026-01-26T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.569629 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.569681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.569694 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.569710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.569721 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:15Z","lastTransitionTime":"2026-01-26T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.588808 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:15 crc kubenswrapper[4787]: E0126 17:45:15.588973 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.592926 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 07:42:27.308370435 +0000 UTC Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.673146 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.673199 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.673209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.673221 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.673230 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:15Z","lastTransitionTime":"2026-01-26T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.775581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.775619 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.775630 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.775643 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.775653 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:15Z","lastTransitionTime":"2026-01-26T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.878034 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.878081 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.878091 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.878104 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.878114 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:15Z","lastTransitionTime":"2026-01-26T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.982037 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.982095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.982113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.982138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:15 crc kubenswrapper[4787]: I0126 17:45:15.982160 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:15Z","lastTransitionTime":"2026-01-26T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.084808 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.085044 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.085112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.085142 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.085158 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:16Z","lastTransitionTime":"2026-01-26T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.188652 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.188702 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.191001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.191027 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.191043 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:16Z","lastTransitionTime":"2026-01-26T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.293538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.293603 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.293615 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.293634 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.293649 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:16Z","lastTransitionTime":"2026-01-26T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.396069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.396103 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.396111 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.396125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.396134 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:16Z","lastTransitionTime":"2026-01-26T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.499181 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.499267 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.499293 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.499326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.499350 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:16Z","lastTransitionTime":"2026-01-26T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.589253 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.589404 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:16 crc kubenswrapper[4787]: E0126 17:45:16.589538 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.589613 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:16 crc kubenswrapper[4787]: E0126 17:45:16.590012 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:16 crc kubenswrapper[4787]: E0126 17:45:16.590219 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.593458 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:24:09.907326809 +0000 UTC Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.603348 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.603398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.603408 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.603425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.603435 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:16Z","lastTransitionTime":"2026-01-26T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.706611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.706664 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.706676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.706693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.706706 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:16Z","lastTransitionTime":"2026-01-26T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.809732 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.809796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.809810 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.809832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.809846 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:16Z","lastTransitionTime":"2026-01-26T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.913493 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.913604 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.913618 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.913643 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:16 crc kubenswrapper[4787]: I0126 17:45:16.913661 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:16Z","lastTransitionTime":"2026-01-26T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.015920 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.016009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.016030 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.016068 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.016083 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:17Z","lastTransitionTime":"2026-01-26T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.119568 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.119938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.120005 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.120031 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.120048 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:17Z","lastTransitionTime":"2026-01-26T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.222494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.222534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.222543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.222557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.222568 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:17Z","lastTransitionTime":"2026-01-26T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.325333 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.325375 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.325390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.325409 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.325421 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:17Z","lastTransitionTime":"2026-01-26T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.429169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.429239 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.429250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.429265 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.429277 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:17Z","lastTransitionTime":"2026-01-26T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.531644 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.531703 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.531723 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.531747 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.531764 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:17Z","lastTransitionTime":"2026-01-26T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.588847 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:17 crc kubenswrapper[4787]: E0126 17:45:17.589154 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.593902 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:15:26.86090677 +0000 UTC Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.634623 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.634679 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.634700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.634727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.634748 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:17Z","lastTransitionTime":"2026-01-26T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.738420 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.738509 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.738526 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.738549 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.738574 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:17Z","lastTransitionTime":"2026-01-26T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.840854 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.840937 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.841002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.841029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.841045 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:17Z","lastTransitionTime":"2026-01-26T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.944164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.944209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.944217 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.944231 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:17 crc kubenswrapper[4787]: I0126 17:45:17.944239 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:17Z","lastTransitionTime":"2026-01-26T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.047577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.047638 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.047658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.047680 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.047696 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:18Z","lastTransitionTime":"2026-01-26T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.150469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.150525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.150545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.150570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.150589 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:18Z","lastTransitionTime":"2026-01-26T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.253384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.253459 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.253482 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.253510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.253530 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:18Z","lastTransitionTime":"2026-01-26T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.356726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.356796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.356820 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.356856 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.356878 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:18Z","lastTransitionTime":"2026-01-26T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.459434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.459468 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.459476 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.459489 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.459498 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:18Z","lastTransitionTime":"2026-01-26T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.562196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.562233 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.562243 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.562256 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.562266 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:18Z","lastTransitionTime":"2026-01-26T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.588305 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.588324 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:18 crc kubenswrapper[4787]: E0126 17:45:18.588418 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.588458 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:18 crc kubenswrapper[4787]: E0126 17:45:18.588599 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:18 crc kubenswrapper[4787]: E0126 17:45:18.588827 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.594788 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 11:58:41.051732983 +0000 UTC Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.665403 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.665469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.665492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.665521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.665543 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:18Z","lastTransitionTime":"2026-01-26T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.767821 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.767890 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.767915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.767980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.768008 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:18Z","lastTransitionTime":"2026-01-26T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.870282 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.870335 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.870351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.870372 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.870385 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:18Z","lastTransitionTime":"2026-01-26T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.973013 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.973058 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.973069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.973085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:18 crc kubenswrapper[4787]: I0126 17:45:18.973096 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:18Z","lastTransitionTime":"2026-01-26T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.075861 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.075934 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.076000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.076033 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.076056 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:19Z","lastTransitionTime":"2026-01-26T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.178304 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.178331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.178340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.178353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.178361 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:19Z","lastTransitionTime":"2026-01-26T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.280751 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.280792 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.280803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.280817 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.280828 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:19Z","lastTransitionTime":"2026-01-26T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.383663 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.383722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.383742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.383767 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.383783 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:19Z","lastTransitionTime":"2026-01-26T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.486228 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.486292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.486309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.486331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.486348 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:19Z","lastTransitionTime":"2026-01-26T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.588605 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:19 crc kubenswrapper[4787]: E0126 17:45:19.588844 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.589125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.589198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.589215 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.589239 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.589260 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:19Z","lastTransitionTime":"2026-01-26T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.590317 4787 scope.go:117] "RemoveContainer" containerID="54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe" Jan 26 17:45:19 crc kubenswrapper[4787]: E0126 17:45:19.590675 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.595135 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:25:02.52609015 +0000 UTC Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.606051 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.617544 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.630090 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.643152 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.657360 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.679533 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.691795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.691825 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.691835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.691850 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.691860 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:19Z","lastTransitionTime":"2026-01-26T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.691979 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.705433 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.719486 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9c
f2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.731506 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95
308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.744390 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f50e69-15a4-4548-b06c-fd43808757ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50bb15d58e74b242e0d6c51fe7ab5f126525dab3e42e519a4d3f2c1ccac0ee28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d681c7d6a4d81e784dbc46d62ac85a1a2494d8518a0b58e37db2b4b5cb7b22ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5646bb0a80079e0292e226a56941575cb44ba454fbead79745afd303d721c9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.760560 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022
e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.784137 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:07Z\\\",\\\"message\\\":\\\"tor.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:45:06.523635 6793 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 
17:45:06.523641 6793 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:45:06.523660 6793 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:45:06.523075 6793 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:45:06.524475 6793 factory.go:656] Stopping watch factory\\\\nI0126 17:45:06.529817 6793 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:45:06.529840 6793 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:45:06.529890 6793 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:45:06.529919 6793 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:45:06.530019 6793 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:45:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf
07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.794647 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.794915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.794927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.794963 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.794978 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:19Z","lastTransitionTime":"2026-01-26T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.800001 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a
0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.816566 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.830783 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:00Z\\\",\\\"message\\\":\\\"2026-01-26T17:44:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8\\\\n2026-01-26T17:44:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8 to /host/opt/cni/bin/\\\\n2026-01-26T17:44:14Z [verbose] multus-daemon started\\\\n2026-01-26T17:44:14Z [verbose] Readiness Indicator file check\\\\n2026-01-26T17:44:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.847158 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.866094 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:19Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.898019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.898067 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.898075 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 
17:45:19.898092 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:19 crc kubenswrapper[4787]: I0126 17:45:19.898103 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:19Z","lastTransitionTime":"2026-01-26T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.000533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.000573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.000581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.000595 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.000604 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:20Z","lastTransitionTime":"2026-01-26T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.102701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.102757 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.102777 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.102799 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.102815 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:20Z","lastTransitionTime":"2026-01-26T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.205167 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.205219 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.205242 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.205267 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.205284 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:20Z","lastTransitionTime":"2026-01-26T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.312902 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.312972 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.312986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.313015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.313028 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:20Z","lastTransitionTime":"2026-01-26T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.416998 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.417069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.417082 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.417107 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.417122 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:20Z","lastTransitionTime":"2026-01-26T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.520966 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.521013 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.521027 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.521046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.521058 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:20Z","lastTransitionTime":"2026-01-26T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.589122 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.589157 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.589234 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:20 crc kubenswrapper[4787]: E0126 17:45:20.589363 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:20 crc kubenswrapper[4787]: E0126 17:45:20.589534 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:20 crc kubenswrapper[4787]: E0126 17:45:20.589670 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.596099 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 23:22:55.19052704 +0000 UTC Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.624729 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.624782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.624793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.624814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.624828 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:20Z","lastTransitionTime":"2026-01-26T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.727718 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.727782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.727801 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.727820 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.727833 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:20Z","lastTransitionTime":"2026-01-26T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.831026 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.831094 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.831106 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.831128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.831141 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:20Z","lastTransitionTime":"2026-01-26T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.933782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.933882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.933907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.933939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:20 crc kubenswrapper[4787]: I0126 17:45:20.934002 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:20Z","lastTransitionTime":"2026-01-26T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.036766 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.036819 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.036828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.036863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.036874 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:21Z","lastTransitionTime":"2026-01-26T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.139700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.139772 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.139790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.139811 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.139826 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:21Z","lastTransitionTime":"2026-01-26T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.243267 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.243337 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.243352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.243376 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.243394 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:21Z","lastTransitionTime":"2026-01-26T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.346453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.346501 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.346516 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.346533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.346545 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:21Z","lastTransitionTime":"2026-01-26T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.450060 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.450140 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.450154 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.450179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.450194 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:21Z","lastTransitionTime":"2026-01-26T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.553566 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.553631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.553651 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.553677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.553697 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:21Z","lastTransitionTime":"2026-01-26T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.588803 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:21 crc kubenswrapper[4787]: E0126 17:45:21.590138 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.597054 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 07:12:19.267001737 +0000 UTC Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.605504 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.608889 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418f020a-c193-4323-a29a-59c3ad0f1d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c03ceabd18954a9a615166a8b1eca0de2aca0b176ba5cf6e605982ee4517f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gz5f4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6x4t8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.622370 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98cea2bc-3771-492e-8944-f87958ff034a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9ad32da6862d6dc8eaf0ae0d5248aac728c8ca721b4460da2fe200728d9d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aee78daaabc4a214d93667b49abdaaa6d95308466d0cc445ee7225b0c177c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96wdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b992n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc 
kubenswrapper[4787]: I0126 17:45:21.638050 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f50e69-15a4-4548-b06c-fd43808757ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50bb15d58e74b242e0d6c51fe7ab5f126525dab3e42e519a4d3f2c1ccac0ee28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d681c7d6a4d81e784dbc46d62ac85a1a2494d8518a0b58e37db2b4b5cb7b22ab\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5646bb0a80079e0292e226a56941575cb44ba454fbead79745afd303d721c9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c5f789937f51f5fa81ec7f144182a4dd2033706dfd7a2b10b73f2ab549b61b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.657640 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8a40b5e-ab63-4d0a-ab55-7d507562b39c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccac48713d5791f7b44d5ef42a7e2d88b55883b81fe7214b29f8ab81c7594e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0e92a8c70ae29e3e30b893f44b88429e2cdd3ecbf0b3dc0e18a9e50180f639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34834edfc16551699c51b11d73349117754c07d53d24db08f4d69b9359179fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad943569f7e1f965307d6acf94edd3da8925f219e7be6a200e4722da55b2a165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95b5d64c7f8073dea6abadcea7ab40095cd0c529347ae30b97e4aabe97139ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb814501842616c0a2622e311cf129314c66233e18e30ba3a6e300190f0117a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309d0b0a356b354f6b3e312271d1fdc17d90aa090d747923a8b91891b9e498f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d3b5a0350f01fcadff6c499040d82ae10f63b89b076b02f154de11f3fd9976\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.658356 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.658394 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.658406 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.658426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.658438 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:21Z","lastTransitionTime":"2026-01-26T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.672759 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2aea977199938f9829e53f5c9621bb124ae059ba88167c7c6d4246bf6d90520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.691632 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086c3eb245a969673eee0b4f61d6b105db064ebf49156683952072a77e20158e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19d59ec60ee6324b1f92091169eb858f693e3cc6cac018bce4502247cdf9e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.708918 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d334843c-0fc8-4f2b-be0d-04f020ec3259\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:5
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:43:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.725142 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85f18c51dd0f4c69793b9d0e4054808425e245ad12e4b81e17ecba0e613d90e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.737447 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5jzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44f46132-8fcb-4066-9925-e9245a901928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce3d022e6ead6211596a02e8f6e5f8eda9da104e3a6d56f33e197cee609
c4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6hlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5jzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.761429 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.761486 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.761496 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.761512 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.761525 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:21Z","lastTransitionTime":"2026-01-26T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.767855 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474c6821-f8c5-400e-a584-0d63c13e0655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:07Z\\\",\\\"message\\\":\\\"tor.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 17:45:06.523635 6793 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 
17:45:06.523641 6793 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:45:06.523660 6793 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 17:45:06.523075 6793 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 17:45:06.524475 6793 factory.go:656] Stopping watch factory\\\\nI0126 17:45:06.529817 6793 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 17:45:06.529840 6793 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 17:45:06.529890 6793 ovnkube.go:599] Stopped ovnkube\\\\nI0126 17:45:06.529919 6793 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 17:45:06.530019 6793 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:45:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b334b11b9f31eadf
07452edca360828cb9eb6cf80173421622055e44c6fa82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jdfz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cpbtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.789155 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1fedc8-eb4f-4b7d-a0c6-e9a4135a0b47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e1f0427eff28b55e4547bf836c1bbf0d209544d85d87b879327529dd38b8645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40939c6789983f820fb5fd8f05c3d892620d985fd77b291227bf5fe6532a9658\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:43:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bbb88e5eaa0b1b387cba4fe8d7ff142842952438078dceb92f2470fff1ae93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T17:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:43:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.807023 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.828516 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-65mpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e50ad1-82f9-48f0-a103-6d584a3fa02e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T17:45:00Z\\\",\\\"message\\\":\\\"2026-01-26T17:44:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8\\\\n2026-01-26T17:44:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6e756cc1-acec-4415-85db-d3fb163629d8 to /host/opt/cni/bin/\\\\n2026-01-26T17:44:14Z [verbose] multus-daemon started\\\\n2026-01-26T17:44:14Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T17:44:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrcqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-65mpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.845427 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.865076 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.865201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.865214 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.865234 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.865262 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:21Z","lastTransitionTime":"2026-01-26T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.868087 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.893442 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-slcv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec33f96-57e2-438c-83d4-943e0782ca1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff53f8386dda686788659b6f75453951f555d699ef49daa7efeb81f0efba366d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T17:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa914f0175324288412879942d011e0653a0650d6654de700878bf7accc57b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40d6f3ea6c616381f2b8933c57361efbe32694a02b67edd8b1e854c6cce18a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f6c4f7c197d2a6aada810afab2de29df4da2d71ff634df863257bd24e48f47f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9161
c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9161c053f9d3db04a59ac8acefc3f8ddd5dba4fa25c8d5fb93123c0c8a1d068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e9bf6154cb38f5a6ba1dffdaedb07be0be1f2ea93e687becb705219206982c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:18Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf5256db5901fcb37baf7aea8f67d7b731f1641f829bc7fe7efaadb4c6d0ace8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T17:44:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T17:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpr2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-slcv9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.905815 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tqrnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50235fa2-913c-4797-a4e3-6bf92f998335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e301336434ed353f1f72ce21fe752872ba940f5967b6b7402aff295da238656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T17:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj6cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tqrnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.925522 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04b2906-5567-4455-a1e8-5d85d5ea882e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T17:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvrhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T17:44:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vkdfd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:21Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.968599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.968672 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.968697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.968731 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:21 crc kubenswrapper[4787]: I0126 17:45:21.968756 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:21Z","lastTransitionTime":"2026-01-26T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.072391 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.072441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.072454 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.072474 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.072488 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:22Z","lastTransitionTime":"2026-01-26T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.176216 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.176445 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.176473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.176505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.176523 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:22Z","lastTransitionTime":"2026-01-26T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.279665 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.280371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.280517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.280617 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.280709 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:22Z","lastTransitionTime":"2026-01-26T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.384138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.384189 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.384201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.384217 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.384228 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:22Z","lastTransitionTime":"2026-01-26T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.486914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.487427 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.487592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.487930 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.488163 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:22Z","lastTransitionTime":"2026-01-26T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.588646 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.588673 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.588773 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:22 crc kubenswrapper[4787]: E0126 17:45:22.589044 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:22 crc kubenswrapper[4787]: E0126 17:45:22.589234 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:22 crc kubenswrapper[4787]: E0126 17:45:22.589386 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.591107 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.591167 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.591189 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.591216 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.591237 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:22Z","lastTransitionTime":"2026-01-26T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.597645 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:14:45.574704171 +0000 UTC Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.694163 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.694478 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.694616 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.694768 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.694896 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:22Z","lastTransitionTime":"2026-01-26T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.797399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.797442 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.797450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.797464 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.797473 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:22Z","lastTransitionTime":"2026-01-26T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.899773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.899815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.899824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.899838 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:22 crc kubenswrapper[4787]: I0126 17:45:22.899849 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:22Z","lastTransitionTime":"2026-01-26T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.001684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.001726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.001740 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.001756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.001766 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:23Z","lastTransitionTime":"2026-01-26T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.104439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.104495 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.104512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.104570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.104586 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:23Z","lastTransitionTime":"2026-01-26T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.206655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.206716 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.206909 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.206935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.206964 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:23Z","lastTransitionTime":"2026-01-26T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.309232 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.309272 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.309280 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.309295 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.309305 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:23Z","lastTransitionTime":"2026-01-26T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.412449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.412517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.412532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.412553 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.412570 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:23Z","lastTransitionTime":"2026-01-26T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.515333 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.515413 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.515441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.515479 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.515515 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:23Z","lastTransitionTime":"2026-01-26T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.589276 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:23 crc kubenswrapper[4787]: E0126 17:45:23.589542 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.597799 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:34:52.097379359 +0000 UTC Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.618109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.618177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.618197 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.618220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.618237 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:23Z","lastTransitionTime":"2026-01-26T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.721114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.721224 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.721250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.721337 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.721364 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:23Z","lastTransitionTime":"2026-01-26T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.824977 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.825035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.825052 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.825074 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.825091 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:23Z","lastTransitionTime":"2026-01-26T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.928135 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.928189 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.928206 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.928230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:23 crc kubenswrapper[4787]: I0126 17:45:23.928247 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:23Z","lastTransitionTime":"2026-01-26T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.031581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.032052 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.032244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.032399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.032544 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.135030 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.135315 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.135404 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.135526 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.135614 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.238430 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.238474 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.238485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.238499 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.238508 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.341680 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.341770 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.341801 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.341835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.341854 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.448836 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.449123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.449216 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.449318 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.449409 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.552065 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.552432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.552570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.552706 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.552830 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.589222 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.589236 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.589261 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:24 crc kubenswrapper[4787]: E0126 17:45:24.589881 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:24 crc kubenswrapper[4787]: E0126 17:45:24.590045 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:24 crc kubenswrapper[4787]: E0126 17:45:24.590252 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.598205 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:38:34.902615443 +0000 UTC Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.656047 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.656334 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.656426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.656513 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.656579 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.759129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.759398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.759463 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.759533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.759609 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.862772 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.862801 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.862813 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.862827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.862838 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.942489 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.942873 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.943044 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.943196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.943315 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: E0126 17:45:24.962069 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.967132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.967186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.967203 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.967228 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.967243 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:24 crc kubenswrapper[4787]: E0126 17:45:24.984045 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:24Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.989926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.990001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.990015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.990033 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:24 crc kubenswrapper[4787]: I0126 17:45:24.990046 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:24Z","lastTransitionTime":"2026-01-26T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: E0126 17:45:25.004784 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.009169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.009209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.009223 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.009239 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.009252 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: E0126 17:45:25.022596 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.026714 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.026757 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.026773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.026794 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.026809 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: E0126 17:45:25.039931 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:45:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92906ba-5f63-4676-93ca-b9fd3c104d01\\\",\\\"systemUUID\\\":\\\"cb316aee-f977-43a3-b6ab-af2db7230a5b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:45:25Z is after 2025-08-24T17:21:41Z" Jan 26 17:45:25 crc kubenswrapper[4787]: E0126 17:45:25.040123 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.041587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.041637 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.041647 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.041663 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.041675 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.143834 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.143921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.143988 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.144014 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.144036 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.251227 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.251302 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.251319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.251342 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.251358 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.353898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.353936 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.353960 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.353975 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.353986 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.456426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.456473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.456483 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.456499 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.456510 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.559055 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.559113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.559123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.559140 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.559151 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.588849 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:25 crc kubenswrapper[4787]: E0126 17:45:25.589198 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.599380 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 09:32:13.464938616 +0000 UTC Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.661221 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.661265 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.661277 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.661292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.661302 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.764151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.764511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.764803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.765085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.765305 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.868714 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.868764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.868775 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.868796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.868805 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.972211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.972276 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.972292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.972320 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:25 crc kubenswrapper[4787]: I0126 17:45:25.972339 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:25Z","lastTransitionTime":"2026-01-26T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.075821 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.075879 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.075891 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.075908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.075919 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:26Z","lastTransitionTime":"2026-01-26T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.179425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.179505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.179521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.179545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.179561 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:26Z","lastTransitionTime":"2026-01-26T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.282813 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.282886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.282909 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.282937 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.282997 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:26Z","lastTransitionTime":"2026-01-26T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.386009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.386049 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.386059 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.386072 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.386081 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:26Z","lastTransitionTime":"2026-01-26T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.489164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.489241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.489264 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.489288 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.489305 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:26Z","lastTransitionTime":"2026-01-26T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.588632 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.588690 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.588633 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:26 crc kubenswrapper[4787]: E0126 17:45:26.588806 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:26 crc kubenswrapper[4787]: E0126 17:45:26.589044 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:26 crc kubenswrapper[4787]: E0126 17:45:26.589128 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.592781 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.592856 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.592868 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.592896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.592912 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:26Z","lastTransitionTime":"2026-01-26T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.599898 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:40:45.830382879 +0000 UTC Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.695483 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.695545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.695554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.695575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.695588 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:26Z","lastTransitionTime":"2026-01-26T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.798687 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.798741 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.798753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.798774 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.798789 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:26Z","lastTransitionTime":"2026-01-26T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.903321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.903374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.903387 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.903410 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:26 crc kubenswrapper[4787]: I0126 17:45:26.903425 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:26Z","lastTransitionTime":"2026-01-26T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.005838 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.005917 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.005941 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.006025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.006048 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:27Z","lastTransitionTime":"2026-01-26T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.108907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.109036 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.109064 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.109127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.109149 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:27Z","lastTransitionTime":"2026-01-26T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.212460 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.212545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.212560 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.212585 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.212602 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:27Z","lastTransitionTime":"2026-01-26T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.316295 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.316386 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.316403 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.316427 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.316446 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:27Z","lastTransitionTime":"2026-01-26T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.419578 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.419625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.419637 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.419658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.419668 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:27Z","lastTransitionTime":"2026-01-26T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.523305 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.523416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.523434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.523464 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.523484 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:27Z","lastTransitionTime":"2026-01-26T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.588318 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:27 crc kubenswrapper[4787]: E0126 17:45:27.588528 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.600148 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 13:23:18.945357531 +0000 UTC Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.625527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.625562 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.625570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.625583 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.625591 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:27Z","lastTransitionTime":"2026-01-26T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.728647 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.728987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.729113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.729258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.729397 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:27Z","lastTransitionTime":"2026-01-26T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.832464 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.832509 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.832519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.832538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.832550 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:27Z","lastTransitionTime":"2026-01-26T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.934978 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.935252 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.935368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.935502 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:27 crc kubenswrapper[4787]: I0126 17:45:27.935699 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:27Z","lastTransitionTime":"2026-01-26T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.038173 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.038465 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.038552 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.038628 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.038714 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:28Z","lastTransitionTime":"2026-01-26T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.141037 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.141315 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.141381 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.141450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.141525 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:28Z","lastTransitionTime":"2026-01-26T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.244105 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.244146 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.244157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.244171 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.244203 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:28Z","lastTransitionTime":"2026-01-26T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.346667 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.346696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.346704 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.346716 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.346726 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:28Z","lastTransitionTime":"2026-01-26T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.449158 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.449204 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.449213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.449230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.449239 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:28Z","lastTransitionTime":"2026-01-26T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.550809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.550867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.550885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.550903 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.550914 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:28Z","lastTransitionTime":"2026-01-26T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.588574 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.588686 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:28 crc kubenswrapper[4787]: E0126 17:45:28.588773 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.588838 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:28 crc kubenswrapper[4787]: E0126 17:45:28.588897 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:28 crc kubenswrapper[4787]: E0126 17:45:28.589096 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.601134 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 00:59:19.983530253 +0000 UTC Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.653274 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.653325 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.653335 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.653357 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.653371 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:28Z","lastTransitionTime":"2026-01-26T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.756256 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.756298 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.756306 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.756338 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.756348 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:28Z","lastTransitionTime":"2026-01-26T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.858778 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.858859 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.858879 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.858904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.858922 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:28Z","lastTransitionTime":"2026-01-26T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.962039 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.962086 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.962098 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.962116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:28 crc kubenswrapper[4787]: I0126 17:45:28.962127 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:28Z","lastTransitionTime":"2026-01-26T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.064524 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.064570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.064581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.064597 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.064610 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:29Z","lastTransitionTime":"2026-01-26T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.167484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.167531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.167541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.167557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.167568 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:29Z","lastTransitionTime":"2026-01-26T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.270435 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.270475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.270493 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.270514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.270527 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:29Z","lastTransitionTime":"2026-01-26T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.373342 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.373376 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.373384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.373395 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.373404 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:29Z","lastTransitionTime":"2026-01-26T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.476053 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.476134 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.476147 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.476169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.476183 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:29Z","lastTransitionTime":"2026-01-26T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.579863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.579935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.580002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.580035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.580057 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:29Z","lastTransitionTime":"2026-01-26T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.588417 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:29 crc kubenswrapper[4787]: E0126 17:45:29.588632 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.601890 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:48:19.037549743 +0000 UTC Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.682712 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.682749 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.682761 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.682790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.682808 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:29Z","lastTransitionTime":"2026-01-26T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.786157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.786256 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.786279 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.786306 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.786322 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:29Z","lastTransitionTime":"2026-01-26T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.888911 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.888980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.888992 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.889009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.889021 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:29Z","lastTransitionTime":"2026-01-26T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.889991 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:29 crc kubenswrapper[4787]: E0126 17:45:29.890150 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:45:29 crc kubenswrapper[4787]: E0126 17:45:29.890232 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs podName:f04b2906-5567-4455-a1e8-5d85d5ea882e nodeName:}" failed. No retries permitted until 2026-01-26 17:46:33.890210828 +0000 UTC m=+162.597347041 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs") pod "network-metrics-daemon-vkdfd" (UID: "f04b2906-5567-4455-a1e8-5d85d5ea882e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.991349 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.991450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.991462 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.991504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:29 crc kubenswrapper[4787]: I0126 17:45:29.991518 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:29Z","lastTransitionTime":"2026-01-26T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.094861 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.094910 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.094922 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.094937 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.094966 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:30Z","lastTransitionTime":"2026-01-26T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.197813 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.197849 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.197857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.197871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.197881 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:30Z","lastTransitionTime":"2026-01-26T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.300533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.300570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.300580 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.300597 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.300606 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:30Z","lastTransitionTime":"2026-01-26T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.403736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.403798 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.403809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.403825 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.403836 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:30Z","lastTransitionTime":"2026-01-26T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.505819 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.505867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.505902 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.505918 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.505930 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:30Z","lastTransitionTime":"2026-01-26T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.588401 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.588900 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.588933 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:30 crc kubenswrapper[4787]: E0126 17:45:30.589112 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:30 crc kubenswrapper[4787]: E0126 17:45:30.589484 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:30 crc kubenswrapper[4787]: E0126 17:45:30.589672 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.602034 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:31:05.83040391 +0000 UTC Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.609688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.610403 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.610443 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.610469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.610485 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:30Z","lastTransitionTime":"2026-01-26T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.714224 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.714255 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.714264 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.714277 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.714286 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:30Z","lastTransitionTime":"2026-01-26T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.817281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.817349 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.817367 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.817389 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.817404 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:30Z","lastTransitionTime":"2026-01-26T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.919755 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.919821 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.919834 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.919853 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:30 crc kubenswrapper[4787]: I0126 17:45:30.919865 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:30Z","lastTransitionTime":"2026-01-26T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.022660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.022710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.022724 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.022743 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.022758 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:31Z","lastTransitionTime":"2026-01-26T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.125592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.125800 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.125898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.125989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.126052 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:31Z","lastTransitionTime":"2026-01-26T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.229235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.229619 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.229823 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.230126 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.230403 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:31Z","lastTransitionTime":"2026-01-26T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.336373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.336421 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.336430 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.336445 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.336456 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:31Z","lastTransitionTime":"2026-01-26T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.439707 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.439748 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.439761 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.439775 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.439786 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:31Z","lastTransitionTime":"2026-01-26T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.541532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.541576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.541587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.541601 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.541614 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:31Z","lastTransitionTime":"2026-01-26T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.589319 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:31 crc kubenswrapper[4787]: E0126 17:45:31.589998 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.590440 4787 scope.go:117] "RemoveContainer" containerID="54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe" Jan 26 17:45:31 crc kubenswrapper[4787]: E0126 17:45:31.590779 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.602800 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:33:01.552275136 +0000 UTC Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.643284 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.64325918 podStartE2EDuration="1m22.64325918s" podCreationTimestamp="2026-01-26 17:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:31.622995572 +0000 UTC m=+100.330131715" watchObservedRunningTime="2026-01-26 17:45:31.64325918 +0000 UTC m=+100.350395313" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.643935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.644007 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.644018 4787 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.644062 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.644077 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:31Z","lastTransitionTime":"2026-01-26T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.684859 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-p5jzw" podStartSLOduration=80.684834555 podStartE2EDuration="1m20.684834555s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:31.659654052 +0000 UTC m=+100.366790185" watchObservedRunningTime="2026-01-26 17:45:31.684834555 +0000 UTC m=+100.391970688" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.714801 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=81.714778886 podStartE2EDuration="1m21.714778886s" podCreationTimestamp="2026-01-26 17:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:31.701145715 +0000 UTC m=+100.408281848" watchObservedRunningTime="2026-01-26 17:45:31.714778886 +0000 UTC m=+100.421915019" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.731998 4787 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-multus/multus-65mpd" podStartSLOduration=80.731979469 podStartE2EDuration="1m20.731979469s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:31.7316529 +0000 UTC m=+100.438789033" watchObservedRunningTime="2026-01-26 17:45:31.731979469 +0000 UTC m=+100.439115602" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.746325 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.746377 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.746390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.746408 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.746422 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:31Z","lastTransitionTime":"2026-01-26T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.790724 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-slcv9" podStartSLOduration=80.790705894 podStartE2EDuration="1m20.790705894s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:31.779546934 +0000 UTC m=+100.486683087" watchObservedRunningTime="2026-01-26 17:45:31.790705894 +0000 UTC m=+100.497842027" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.791144 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tqrnl" podStartSLOduration=80.791137744 podStartE2EDuration="1m20.791137744s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:31.790065087 +0000 UTC m=+100.497201220" watchObservedRunningTime="2026-01-26 17:45:31.791137744 +0000 UTC m=+100.498273877" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.818084 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podStartSLOduration=80.81806571 podStartE2EDuration="1m20.81806571s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:31.817408924 +0000 UTC m=+100.524545057" watchObservedRunningTime="2026-01-26 17:45:31.81806571 +0000 UTC m=+100.525201843" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.831416 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b992n" 
podStartSLOduration=79.831397266 podStartE2EDuration="1m19.831397266s" podCreationTimestamp="2026-01-26 17:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:31.831112299 +0000 UTC m=+100.538248432" watchObservedRunningTime="2026-01-26 17:45:31.831397266 +0000 UTC m=+100.538533399" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.843368 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.843350025 podStartE2EDuration="45.843350025s" podCreationTimestamp="2026-01-26 17:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:31.843193001 +0000 UTC m=+100.550329144" watchObservedRunningTime="2026-01-26 17:45:31.843350025 +0000 UTC m=+100.550486158" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.847992 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.848027 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.848039 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.848055 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.848067 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:31Z","lastTransitionTime":"2026-01-26T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.897398 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=79.897380242 podStartE2EDuration="1m19.897380242s" podCreationTimestamp="2026-01-26 17:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:31.895264299 +0000 UTC m=+100.602400442" watchObservedRunningTime="2026-01-26 17:45:31.897380242 +0000 UTC m=+100.604516385" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.897723 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=10.897714221 podStartE2EDuration="10.897714221s" podCreationTimestamp="2026-01-26 17:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:31.855372307 +0000 UTC m=+100.562508440" watchObservedRunningTime="2026-01-26 17:45:31.897714221 +0000 UTC m=+100.604850354" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.950782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.950833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.950844 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.950863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 
17:45:31 crc kubenswrapper[4787]: I0126 17:45:31.950876 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:31Z","lastTransitionTime":"2026-01-26T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.052859 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.052889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.052897 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.052910 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.052918 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:32Z","lastTransitionTime":"2026-01-26T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.156038 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.156086 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.156095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.156109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.156117 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:32Z","lastTransitionTime":"2026-01-26T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.259251 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.259344 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.259368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.259769 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.259789 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:32Z","lastTransitionTime":"2026-01-26T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.362548 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.362599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.362609 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.362621 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.362630 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:32Z","lastTransitionTime":"2026-01-26T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.464606 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.464639 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.464647 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.464660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.464669 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:32Z","lastTransitionTime":"2026-01-26T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.566829 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.566898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.566910 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.566925 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.566935 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:32Z","lastTransitionTime":"2026-01-26T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.589163 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.589240 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:32 crc kubenswrapper[4787]: E0126 17:45:32.589301 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.589352 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:32 crc kubenswrapper[4787]: E0126 17:45:32.589378 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:32 crc kubenswrapper[4787]: E0126 17:45:32.589536 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.603635 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:49:32.121633911 +0000 UTC Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.668613 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.668649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.668658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.668671 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.668681 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:32Z","lastTransitionTime":"2026-01-26T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.771676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.771705 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.771713 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.771724 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.771733 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:32Z","lastTransitionTime":"2026-01-26T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.874886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.874932 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.874941 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.874984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.874997 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:32Z","lastTransitionTime":"2026-01-26T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.978114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.978169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.978181 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.978198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:32 crc kubenswrapper[4787]: I0126 17:45:32.978209 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:32Z","lastTransitionTime":"2026-01-26T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.080108 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.080172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.080184 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.080202 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.080214 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:33Z","lastTransitionTime":"2026-01-26T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.182015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.182056 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.182067 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.182110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.182122 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:33Z","lastTransitionTime":"2026-01-26T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.284250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.284302 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.284322 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.284346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.284364 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:33Z","lastTransitionTime":"2026-01-26T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.386622 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.386667 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.386677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.386690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.386699 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:33Z","lastTransitionTime":"2026-01-26T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.488424 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.488473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.488484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.488502 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.488514 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:33Z","lastTransitionTime":"2026-01-26T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.588724 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:33 crc kubenswrapper[4787]: E0126 17:45:33.588862 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.590613 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.590678 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.590694 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.590719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.590736 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:33Z","lastTransitionTime":"2026-01-26T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.604496 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:41:06.149219804 +0000 UTC Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.692925 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.693027 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.693088 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.693116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.693167 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:33Z","lastTransitionTime":"2026-01-26T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.796292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.796354 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.796365 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.796385 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.796399 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:33Z","lastTransitionTime":"2026-01-26T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.898687 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.898767 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.898779 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.898800 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:33 crc kubenswrapper[4787]: I0126 17:45:33.898815 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:33Z","lastTransitionTime":"2026-01-26T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.001416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.001513 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.001528 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.001557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.001573 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:34Z","lastTransitionTime":"2026-01-26T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.105140 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.105218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.105241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.105271 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.105294 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:34Z","lastTransitionTime":"2026-01-26T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.208223 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.208279 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.208294 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.208316 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.208333 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:34Z","lastTransitionTime":"2026-01-26T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.311614 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.311676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.311697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.311725 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.311746 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:34Z","lastTransitionTime":"2026-01-26T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.415098 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.415136 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.415149 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.415164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.415172 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:34Z","lastTransitionTime":"2026-01-26T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.518476 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.518545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.518573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.518603 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.518625 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:34Z","lastTransitionTime":"2026-01-26T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.588597 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:34 crc kubenswrapper[4787]: E0126 17:45:34.588724 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.588738 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.588597 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:34 crc kubenswrapper[4787]: E0126 17:45:34.589011 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:34 crc kubenswrapper[4787]: E0126 17:45:34.589078 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.604850 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:47:43.448059403 +0000 UTC Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.621281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.621329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.621348 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.621368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.621378 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:34Z","lastTransitionTime":"2026-01-26T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.724491 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.724553 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.724572 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.724595 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.724609 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:34Z","lastTransitionTime":"2026-01-26T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.827893 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.827972 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.827989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.828017 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.828043 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:34Z","lastTransitionTime":"2026-01-26T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.932272 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.932315 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.932324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.932340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:34 crc kubenswrapper[4787]: I0126 17:45:34.932350 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:34Z","lastTransitionTime":"2026-01-26T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.034792 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.034858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.034870 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.034895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.034914 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:35Z","lastTransitionTime":"2026-01-26T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.138127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.138171 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.138181 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.138198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.138209 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:35Z","lastTransitionTime":"2026-01-26T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.240935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.240999 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.241007 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.241020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.241030 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:35Z","lastTransitionTime":"2026-01-26T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.343707 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.343780 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.343803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.343848 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.343869 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:35Z","lastTransitionTime":"2026-01-26T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.376591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.376655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.376672 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.376696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.376712 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:45:35Z","lastTransitionTime":"2026-01-26T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.444334 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr"] Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.445318 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.447659 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.447887 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.448057 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.449443 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.553693 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/60094a96-ebb9-4688-9555-0cdb8800260c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.554001 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60094a96-ebb9-4688-9555-0cdb8800260c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.554099 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/60094a96-ebb9-4688-9555-0cdb8800260c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.554185 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60094a96-ebb9-4688-9555-0cdb8800260c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.554270 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/60094a96-ebb9-4688-9555-0cdb8800260c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.588822 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:35 crc kubenswrapper[4787]: E0126 17:45:35.589059 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.606038 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:26:56.61607005 +0000 UTC Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.606122 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.617416 4787 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.655745 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/60094a96-ebb9-4688-9555-0cdb8800260c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.655811 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60094a96-ebb9-4688-9555-0cdb8800260c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.655878 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/60094a96-ebb9-4688-9555-0cdb8800260c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc 
kubenswrapper[4787]: I0126 17:45:35.655931 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60094a96-ebb9-4688-9555-0cdb8800260c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.656010 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/60094a96-ebb9-4688-9555-0cdb8800260c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.656133 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/60094a96-ebb9-4688-9555-0cdb8800260c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.656192 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/60094a96-ebb9-4688-9555-0cdb8800260c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.657197 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/60094a96-ebb9-4688-9555-0cdb8800260c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: 
\"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.662374 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60094a96-ebb9-4688-9555-0cdb8800260c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.678477 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60094a96-ebb9-4688-9555-0cdb8800260c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gtcmr\" (UID: \"60094a96-ebb9-4688-9555-0cdb8800260c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:35 crc kubenswrapper[4787]: I0126 17:45:35.766178 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" Jan 26 17:45:36 crc kubenswrapper[4787]: I0126 17:45:36.153317 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" event={"ID":"60094a96-ebb9-4688-9555-0cdb8800260c","Type":"ContainerStarted","Data":"82b0cd501d73352cd871fa54fd183800edda400d5dcf81da355d5c5a9aad6e03"} Jan 26 17:45:36 crc kubenswrapper[4787]: I0126 17:45:36.588689 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:36 crc kubenswrapper[4787]: I0126 17:45:36.588687 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:36 crc kubenswrapper[4787]: I0126 17:45:36.588838 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:36 crc kubenswrapper[4787]: E0126 17:45:36.589011 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:36 crc kubenswrapper[4787]: E0126 17:45:36.589191 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:36 crc kubenswrapper[4787]: E0126 17:45:36.589565 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:37 crc kubenswrapper[4787]: I0126 17:45:37.158427 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" event={"ID":"60094a96-ebb9-4688-9555-0cdb8800260c","Type":"ContainerStarted","Data":"a9dc7cf4dff11d8ec9771f9175cd93291482d72041e5aeded02a0b813494d3e7"} Jan 26 17:45:37 crc kubenswrapper[4787]: I0126 17:45:37.589400 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:37 crc kubenswrapper[4787]: E0126 17:45:37.589698 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:38 crc kubenswrapper[4787]: I0126 17:45:38.589192 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:38 crc kubenswrapper[4787]: I0126 17:45:38.589237 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:38 crc kubenswrapper[4787]: I0126 17:45:38.589365 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:38 crc kubenswrapper[4787]: E0126 17:45:38.589538 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:38 crc kubenswrapper[4787]: E0126 17:45:38.589727 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:38 crc kubenswrapper[4787]: E0126 17:45:38.589927 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:39 crc kubenswrapper[4787]: I0126 17:45:39.588675 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:39 crc kubenswrapper[4787]: E0126 17:45:39.588801 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:40 crc kubenswrapper[4787]: I0126 17:45:40.588636 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:40 crc kubenswrapper[4787]: I0126 17:45:40.588764 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:40 crc kubenswrapper[4787]: I0126 17:45:40.588670 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:40 crc kubenswrapper[4787]: E0126 17:45:40.588868 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:40 crc kubenswrapper[4787]: E0126 17:45:40.589105 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:40 crc kubenswrapper[4787]: E0126 17:45:40.589225 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:41 crc kubenswrapper[4787]: I0126 17:45:41.588920 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:41 crc kubenswrapper[4787]: E0126 17:45:41.590350 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:42 crc kubenswrapper[4787]: I0126 17:45:42.588329 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:42 crc kubenswrapper[4787]: I0126 17:45:42.588438 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:42 crc kubenswrapper[4787]: I0126 17:45:42.588533 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:42 crc kubenswrapper[4787]: E0126 17:45:42.588639 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:42 crc kubenswrapper[4787]: E0126 17:45:42.588803 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:42 crc kubenswrapper[4787]: E0126 17:45:42.589031 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:43 crc kubenswrapper[4787]: I0126 17:45:43.589323 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:43 crc kubenswrapper[4787]: E0126 17:45:43.589792 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:44 crc kubenswrapper[4787]: I0126 17:45:44.588812 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:44 crc kubenswrapper[4787]: I0126 17:45:44.588812 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:44 crc kubenswrapper[4787]: I0126 17:45:44.589454 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:44 crc kubenswrapper[4787]: E0126 17:45:44.589444 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:44 crc kubenswrapper[4787]: E0126 17:45:44.589581 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:44 crc kubenswrapper[4787]: E0126 17:45:44.589690 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:44 crc kubenswrapper[4787]: I0126 17:45:44.590683 4787 scope.go:117] "RemoveContainer" containerID="54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe" Jan 26 17:45:44 crc kubenswrapper[4787]: E0126 17:45:44.590918 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cpbtq_openshift-ovn-kubernetes(474c6821-f8c5-400e-a584-0d63c13e0655)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" Jan 26 17:45:45 crc kubenswrapper[4787]: I0126 17:45:45.589255 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:45 crc kubenswrapper[4787]: E0126 17:45:45.589401 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:46 crc kubenswrapper[4787]: I0126 17:45:46.589206 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:46 crc kubenswrapper[4787]: I0126 17:45:46.589240 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:46 crc kubenswrapper[4787]: I0126 17:45:46.589251 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:46 crc kubenswrapper[4787]: E0126 17:45:46.589350 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:46 crc kubenswrapper[4787]: E0126 17:45:46.589457 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:46 crc kubenswrapper[4787]: E0126 17:45:46.589603 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:47 crc kubenswrapper[4787]: I0126 17:45:47.208839 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-65mpd_d2e50ad1-82f9-48f0-a103-6d584a3fa02e/kube-multus/1.log" Jan 26 17:45:47 crc kubenswrapper[4787]: I0126 17:45:47.209758 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-65mpd_d2e50ad1-82f9-48f0-a103-6d584a3fa02e/kube-multus/0.log" Jan 26 17:45:47 crc kubenswrapper[4787]: I0126 17:45:47.209838 4787 generic.go:334] "Generic (PLEG): container finished" podID="d2e50ad1-82f9-48f0-a103-6d584a3fa02e" containerID="e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff" exitCode=1 Jan 26 17:45:47 crc kubenswrapper[4787]: I0126 17:45:47.209899 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65mpd" event={"ID":"d2e50ad1-82f9-48f0-a103-6d584a3fa02e","Type":"ContainerDied","Data":"e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff"} Jan 26 17:45:47 crc kubenswrapper[4787]: I0126 17:45:47.210078 4787 scope.go:117] "RemoveContainer" containerID="6d005e3841226f6819d477aae96c8ff86b45b21ef774436e5b001e0755f83648" Jan 26 17:45:47 crc kubenswrapper[4787]: I0126 17:45:47.211378 4787 scope.go:117] "RemoveContainer" containerID="e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff" Jan 26 17:45:47 crc kubenswrapper[4787]: E0126 
17:45:47.211713 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-65mpd_openshift-multus(d2e50ad1-82f9-48f0-a103-6d584a3fa02e)\"" pod="openshift-multus/multus-65mpd" podUID="d2e50ad1-82f9-48f0-a103-6d584a3fa02e" Jan 26 17:45:47 crc kubenswrapper[4787]: I0126 17:45:47.234880 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gtcmr" podStartSLOduration=96.234855211 podStartE2EDuration="1m36.234855211s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:37.17768591 +0000 UTC m=+105.884822053" watchObservedRunningTime="2026-01-26 17:45:47.234855211 +0000 UTC m=+115.941991364" Jan 26 17:45:47 crc kubenswrapper[4787]: I0126 17:45:47.589370 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:47 crc kubenswrapper[4787]: E0126 17:45:47.589545 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:48 crc kubenswrapper[4787]: I0126 17:45:48.214841 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-65mpd_d2e50ad1-82f9-48f0-a103-6d584a3fa02e/kube-multus/1.log" Jan 26 17:45:48 crc kubenswrapper[4787]: I0126 17:45:48.588237 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:48 crc kubenswrapper[4787]: E0126 17:45:48.588363 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:48 crc kubenswrapper[4787]: I0126 17:45:48.588462 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:48 crc kubenswrapper[4787]: I0126 17:45:48.588237 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:48 crc kubenswrapper[4787]: E0126 17:45:48.588614 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:48 crc kubenswrapper[4787]: E0126 17:45:48.588799 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:49 crc kubenswrapper[4787]: I0126 17:45:49.588571 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:49 crc kubenswrapper[4787]: E0126 17:45:49.590282 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:50 crc kubenswrapper[4787]: I0126 17:45:50.588176 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:50 crc kubenswrapper[4787]: I0126 17:45:50.588203 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:50 crc kubenswrapper[4787]: I0126 17:45:50.589103 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:50 crc kubenswrapper[4787]: E0126 17:45:50.589022 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:50 crc kubenswrapper[4787]: E0126 17:45:50.589430 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:50 crc kubenswrapper[4787]: E0126 17:45:50.589798 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:51 crc kubenswrapper[4787]: I0126 17:45:51.588878 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:51 crc kubenswrapper[4787]: E0126 17:45:51.589794 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:51 crc kubenswrapper[4787]: E0126 17:45:51.620493 4787 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 17:45:51 crc kubenswrapper[4787]: E0126 17:45:51.695375 4787 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 17:45:52 crc kubenswrapper[4787]: I0126 17:45:52.588294 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:52 crc kubenswrapper[4787]: I0126 17:45:52.588370 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:52 crc kubenswrapper[4787]: I0126 17:45:52.588439 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:52 crc kubenswrapper[4787]: E0126 17:45:52.588480 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:52 crc kubenswrapper[4787]: E0126 17:45:52.588542 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:52 crc kubenswrapper[4787]: E0126 17:45:52.588892 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:53 crc kubenswrapper[4787]: I0126 17:45:53.588879 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:53 crc kubenswrapper[4787]: E0126 17:45:53.589132 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:54 crc kubenswrapper[4787]: I0126 17:45:54.588616 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:54 crc kubenswrapper[4787]: I0126 17:45:54.588686 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:54 crc kubenswrapper[4787]: I0126 17:45:54.588640 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:54 crc kubenswrapper[4787]: E0126 17:45:54.588802 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:54 crc kubenswrapper[4787]: E0126 17:45:54.588892 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:54 crc kubenswrapper[4787]: E0126 17:45:54.589071 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:55 crc kubenswrapper[4787]: I0126 17:45:55.588589 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:55 crc kubenswrapper[4787]: E0126 17:45:55.588805 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:55 crc kubenswrapper[4787]: I0126 17:45:55.589708 4787 scope.go:117] "RemoveContainer" containerID="54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe" Jan 26 17:45:56 crc kubenswrapper[4787]: I0126 17:45:56.242791 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/3.log" Jan 26 17:45:56 crc kubenswrapper[4787]: I0126 17:45:56.246374 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerStarted","Data":"f68cfb6d7c911e9a9cee20fb12a2a1d01f7639c5f08d063e61c29dc4f8da7bc2"} Jan 26 17:45:56 crc kubenswrapper[4787]: I0126 17:45:56.246731 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:45:56 crc kubenswrapper[4787]: I0126 17:45:56.281276 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podStartSLOduration=105.28125972 podStartE2EDuration="1m45.28125972s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:45:56.281069655 +0000 UTC m=+124.988205798" watchObservedRunningTime="2026-01-26 17:45:56.28125972 
+0000 UTC m=+124.988395843" Jan 26 17:45:56 crc kubenswrapper[4787]: I0126 17:45:56.463052 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vkdfd"] Jan 26 17:45:56 crc kubenswrapper[4787]: I0126 17:45:56.463160 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:56 crc kubenswrapper[4787]: E0126 17:45:56.463279 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:56 crc kubenswrapper[4787]: I0126 17:45:56.588256 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:56 crc kubenswrapper[4787]: E0126 17:45:56.588416 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:56 crc kubenswrapper[4787]: I0126 17:45:56.588608 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:56 crc kubenswrapper[4787]: E0126 17:45:56.588675 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:56 crc kubenswrapper[4787]: I0126 17:45:56.588797 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:56 crc kubenswrapper[4787]: E0126 17:45:56.588862 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:45:56 crc kubenswrapper[4787]: E0126 17:45:56.696782 4787 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 17:45:58 crc kubenswrapper[4787]: I0126 17:45:58.588751 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:45:58 crc kubenswrapper[4787]: I0126 17:45:58.588751 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:45:58 crc kubenswrapper[4787]: E0126 17:45:58.589408 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:45:58 crc kubenswrapper[4787]: I0126 17:45:58.588944 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:45:58 crc kubenswrapper[4787]: E0126 17:45:58.589516 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:45:58 crc kubenswrapper[4787]: I0126 17:45:58.588805 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:45:58 crc kubenswrapper[4787]: E0126 17:45:58.589596 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:45:58 crc kubenswrapper[4787]: E0126 17:45:58.589719 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:46:00 crc kubenswrapper[4787]: I0126 17:46:00.589237 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:46:00 crc kubenswrapper[4787]: I0126 17:46:00.589280 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:46:00 crc kubenswrapper[4787]: I0126 17:46:00.589349 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:46:00 crc kubenswrapper[4787]: I0126 17:46:00.589371 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:46:00 crc kubenswrapper[4787]: E0126 17:46:00.589508 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:46:00 crc kubenswrapper[4787]: E0126 17:46:00.589757 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:46:00 crc kubenswrapper[4787]: E0126 17:46:00.589849 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:46:00 crc kubenswrapper[4787]: I0126 17:46:00.589913 4787 scope.go:117] "RemoveContainer" containerID="e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff" Jan 26 17:46:00 crc kubenswrapper[4787]: E0126 17:46:00.590004 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:46:01 crc kubenswrapper[4787]: I0126 17:46:01.265331 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-65mpd_d2e50ad1-82f9-48f0-a103-6d584a3fa02e/kube-multus/1.log" Jan 26 17:46:01 crc kubenswrapper[4787]: I0126 17:46:01.265436 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65mpd" event={"ID":"d2e50ad1-82f9-48f0-a103-6d584a3fa02e","Type":"ContainerStarted","Data":"e090b2e5f3f08a1ac007a434bb7a458de7e6770d683fa1c18270c831b8aa5db5"} Jan 26 17:46:01 crc kubenswrapper[4787]: E0126 17:46:01.697274 4787 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 17:46:02 crc kubenswrapper[4787]: I0126 17:46:02.588466 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:46:02 crc kubenswrapper[4787]: E0126 17:46:02.588897 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:46:02 crc kubenswrapper[4787]: I0126 17:46:02.588556 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:46:02 crc kubenswrapper[4787]: E0126 17:46:02.589189 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:46:02 crc kubenswrapper[4787]: I0126 17:46:02.588502 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:46:02 crc kubenswrapper[4787]: E0126 17:46:02.589419 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:46:02 crc kubenswrapper[4787]: I0126 17:46:02.588605 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:46:02 crc kubenswrapper[4787]: E0126 17:46:02.589655 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:46:04 crc kubenswrapper[4787]: I0126 17:46:04.589104 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:46:04 crc kubenswrapper[4787]: I0126 17:46:04.589115 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:46:04 crc kubenswrapper[4787]: E0126 17:46:04.589798 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:46:04 crc kubenswrapper[4787]: I0126 17:46:04.589223 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:46:04 crc kubenswrapper[4787]: I0126 17:46:04.589153 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:46:04 crc kubenswrapper[4787]: E0126 17:46:04.589983 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:46:04 crc kubenswrapper[4787]: E0126 17:46:04.589904 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:46:04 crc kubenswrapper[4787]: E0126 17:46:04.590091 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:46:06 crc kubenswrapper[4787]: I0126 17:46:06.588989 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:46:06 crc kubenswrapper[4787]: I0126 17:46:06.589072 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:46:06 crc kubenswrapper[4787]: I0126 17:46:06.589009 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:46:06 crc kubenswrapper[4787]: I0126 17:46:06.588998 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:46:06 crc kubenswrapper[4787]: E0126 17:46:06.590145 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:46:06 crc kubenswrapper[4787]: E0126 17:46:06.590194 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vkdfd" podUID="f04b2906-5567-4455-a1e8-5d85d5ea882e" Jan 26 17:46:06 crc kubenswrapper[4787]: E0126 17:46:06.590215 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:46:06 crc kubenswrapper[4787]: E0126 17:46:06.590245 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:46:08 crc kubenswrapper[4787]: I0126 17:46:08.589274 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:46:08 crc kubenswrapper[4787]: I0126 17:46:08.589356 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:46:08 crc kubenswrapper[4787]: I0126 17:46:08.589563 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:46:08 crc kubenswrapper[4787]: I0126 17:46:08.590074 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:46:08 crc kubenswrapper[4787]: I0126 17:46:08.593122 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 17:46:08 crc kubenswrapper[4787]: I0126 17:46:08.593129 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 17:46:08 crc kubenswrapper[4787]: I0126 17:46:08.593173 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 17:46:08 crc kubenswrapper[4787]: I0126 17:46:08.593308 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 17:46:08 crc kubenswrapper[4787]: I0126 17:46:08.593406 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 17:46:08 crc kubenswrapper[4787]: I0126 17:46:08.593406 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.380466 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.417490 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7fgjp"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.417908 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.422184 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-48jfn"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.422568 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.422803 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.423168 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.424820 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-gndbr"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.425541 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.425998 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.426694 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.429020 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.430829 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.431447 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.431663 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.432220 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.432527 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.433268 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.433524 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.436405 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 17:46:16 crc 
kubenswrapper[4787]: I0126 17:46:16.436915 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.437204 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.437512 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.437692 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.437783 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.438354 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.438750 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.438817 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.439311 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.446485 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.446627 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.448352 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.449835 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.451746 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-n27mc"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.452118 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.454611 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.454738 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.454820 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.454972 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.455293 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.455585 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.455792 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.456136 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.456422 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.456575 4787 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.456700 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.456978 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-khb2f"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.457293 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.458013 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.458292 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.458987 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.459236 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.459360 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.459493 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.459695 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.459898 4787 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.460031 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.460132 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.460229 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.482264 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.482376 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.483769 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.483850 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.504841 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.505148 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.505811 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 17:46:16 crc kubenswrapper[4787]: 
I0126 17:46:16.506015 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.506042 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.506132 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.506216 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.506310 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.507182 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9fdv8"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.508878 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-qgmbn"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.509529 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-qgmbn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.509654 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.509861 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510634 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510686 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510711 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510740 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-serving-cert\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510764 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510788 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c75e61af-58ba-4e57-b3af-2c90ad1e2502-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qtc46\" (UID: \"c75e61af-58ba-4e57-b3af-2c90ad1e2502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510826 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-client-ca\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510850 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b84b9f3-fe68-41db-baba-bee06c16d520-etcd-client\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: 
\"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510877 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510901 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510924 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz22j\" (UniqueName: \"kubernetes.io/projected/0b84b9f3-fe68-41db-baba-bee06c16d520-kube-api-access-jz22j\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510964 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.510991 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511016 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52wgg\" (UniqueName: \"kubernetes.io/projected/704498ce-b71f-4fcb-b152-bdab533c235f-kube-api-access-52wgg\") pod \"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511049 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gtsm\" (UniqueName: \"kubernetes.io/projected/fc5e8dd5-46e7-4849-b278-d1397195e659-kube-api-access-8gtsm\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511075 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-config\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511100 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511121 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c75e61af-58ba-4e57-b3af-2c90ad1e2502-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qtc46\" (UID: \"c75e61af-58ba-4e57-b3af-2c90ad1e2502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511143 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b84b9f3-fe68-41db-baba-bee06c16d520-serving-cert\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511166 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b84b9f3-fe68-41db-baba-bee06c16d520-encryption-config\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511189 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdgd6\" (UniqueName: \"kubernetes.io/projected/d8c4a6bf-9404-4c15-acc5-9563d03b7c47-kube-api-access-wdgd6\") pod \"cluster-samples-operator-665b6dd947-w88rv\" (UID: \"d8c4a6bf-9404-4c15-acc5-9563d03b7c47\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511211 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-policies\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511235 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-console-config\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511257 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/704498ce-b71f-4fcb-b152-bdab533c235f-trusted-ca\") pod \"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511282 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b84b9f3-fe68-41db-baba-bee06c16d520-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511303 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-trusted-ca-bundle\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511327 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511509 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0101e33c-84b1-4777-b381-345e7fa3397b-serving-cert\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511585 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tns2\" (UniqueName: \"kubernetes.io/projected/0101e33c-84b1-4777-b381-345e7fa3397b-kube-api-access-8tns2\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511671 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b84b9f3-fe68-41db-baba-bee06c16d520-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511713 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d8dae22-5db2-4c03-b8e8-f20c3a911957-machine-approver-tls\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511847 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511878 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7chzs\" (UniqueName: \"kubernetes.io/projected/c75e61af-58ba-4e57-b3af-2c90ad1e2502-kube-api-access-7chzs\") pod \"openshift-apiserver-operator-796bbdcf4f-qtc46\" (UID: \"c75e61af-58ba-4e57-b3af-2c90ad1e2502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.511938 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc208e27-2a4a-49a5-b3b4-1880249b93ed-serving-cert\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512025 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512061 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d8dae22-5db2-4c03-b8e8-f20c3a911957-auth-proxy-config\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512092 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8c4a6bf-9404-4c15-acc5-9563d03b7c47-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w88rv\" (UID: \"d8c4a6bf-9404-4c15-acc5-9563d03b7c47\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512124 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b84b9f3-fe68-41db-baba-bee06c16d520-audit-policies\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512181 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0101e33c-84b1-4777-b381-345e7fa3397b-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512218 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/704498ce-b71f-4fcb-b152-bdab533c235f-serving-cert\") pod \"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512251 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-oauth-serving-cert\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512385 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt44s\" (UniqueName: \"kubernetes.io/projected/11106d86-1e86-47cf-907d-9fb690a4f56e-kube-api-access-qt44s\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512454 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8dae22-5db2-4c03-b8e8-f20c3a911957-config\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512504 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-dir\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512554 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0101e33c-84b1-4777-b381-345e7fa3397b-config\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512585 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b84b9f3-fe68-41db-baba-bee06c16d520-audit-dir\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512617 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/704498ce-b71f-4fcb-b152-bdab533c235f-config\") pod \"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512656 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4rjw\" (UniqueName: \"kubernetes.io/projected/9d8dae22-5db2-4c03-b8e8-f20c3a911957-kube-api-access-d4rjw\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512700 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg4r6\" (UniqueName: \"kubernetes.io/projected/bc208e27-2a4a-49a5-b3b4-1880249b93ed-kube-api-access-hg4r6\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512741 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-service-ca\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512772 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0101e33c-84b1-4777-b381-345e7fa3397b-service-ca-bundle\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.512923 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-oauth-config\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.513190 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8l69c"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.513833 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.516769 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.517285 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.524322 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.524521 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.524586 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.524995 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.526666 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.526779 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b7v9x"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.526914 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.532154 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.532205 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7frg7"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.532650 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxjzt"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.532939 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.533088 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.533509 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.533522 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7frg7"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.533570 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.556434 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.557297 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.557630 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.557815 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.558072 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.558337 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.558432 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.559538 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.562288 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.562851 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.563868 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.564586 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.564719 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.565279 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.565360 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.565413 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.565596 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.565730 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.565908 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.566120 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.565605 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.566245 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.566497 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.566752 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.567048 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.568217 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.581852 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.582439 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.582520 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.582626 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.582719 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.582792 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.584723 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.584994 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.585654 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.585991 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.586637 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.586753 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.587109 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.587210 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-gnbmj"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.587890 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gnbmj"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.588305 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q2957"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.588645 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-q2957"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.589176 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.589365 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.589551 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.589659 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.589768 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.590085 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.591703 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.595314 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.595395 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.595713 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.595917 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.596247 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.596445 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.600022 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.600171 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.602386 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.607174 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.607254 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.608162 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.608558 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.608635 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7fgjp"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.608663 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fjcmx"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.608883 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.609816 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.610133 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.610773 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.611117 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.612084 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.612157 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.612728 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.613184 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.613556 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.613814 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b84b9f3-fe68-41db-baba-bee06c16d520-audit-policies\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.613857 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0101e33c-84b1-4777-b381-345e7fa3397b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.613884 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/704498ce-b71f-4fcb-b152-bdab533c235f-serving-cert\") pod \"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.613913 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dad5a6b-7a38-44ac-938b-f0125ca82924-metrics-tls\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.613939 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khmwx\" (UniqueName: \"kubernetes.io/projected/4dad5a6b-7a38-44ac-938b-f0125ca82924-kube-api-access-khmwx\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614121 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614164 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-config\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614186 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhgtr\" (UniqueName: \"kubernetes.io/projected/60f15cf8-bef6-4663-80f3-882d0cb9c415-kube-api-access-dhgtr\") pod \"openshift-controller-manager-operator-756b6f6bc6-td5kl\" (UID: \"60f15cf8-bef6-4663-80f3-882d0cb9c415\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614256 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-oauth-serving-cert\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614297 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50afe70f-eb72-473a-8c31-0841be85a3ca-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614348 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv9z9\" (UniqueName: \"kubernetes.io/projected/7ae4d848-8522-4dbf-bc18-5292c04f6a38-kube-api-access-bv9z9\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614548 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-config\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614749 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5e5aec0-fe65-4fc3-8254-4f288ba0692b-config\") pod \"kube-apiserver-operator-766d6c64bb-t49r7\" (UID: \"e5e5aec0-fe65-4fc3-8254-4f288ba0692b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614783 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5e5aec0-fe65-4fc3-8254-4f288ba0692b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t49r7\" (UID: \"e5e5aec0-fe65-4fc3-8254-4f288ba0692b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614367 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl"]
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614808 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-image-import-ca\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.614978 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt44s\" (UniqueName: \"kubernetes.io/projected/11106d86-1e86-47cf-907d-9fb690a4f56e-kube-api-access-qt44s\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615015 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e3951631-385f-42ea-8f84-0f208fc807b5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8l69c\" (UID: \"e3951631-385f-42ea-8f84-0f208fc807b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615043 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/144a9368-752c-404b-9881-4a03a930b77a-audit-dir\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615070 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8dae22-5db2-4c03-b8e8-f20c3a911957-config\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615097 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-dir\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615126 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7ae4d848-8522-4dbf-bc18-5292c04f6a38-default-certificate\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615150 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0101e33c-84b1-4777-b381-345e7fa3397b-config\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615176 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-dir\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615182 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/144a9368-752c-404b-9881-4a03a930b77a-etcd-client\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615231 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b84b9f3-fe68-41db-baba-bee06c16d520-audit-dir\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615257 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/704498ce-b71f-4fcb-b152-bdab533c235f-config\") pod \"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615284 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a662548-4579-4ff8-ae50-97fbc67542a8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8hdx\" (UID: \"1a662548-4579-4ff8-ae50-97fbc67542a8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615313 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4rjw\" (UniqueName: \"kubernetes.io/projected/9d8dae22-5db2-4c03-b8e8-f20c3a911957-kube-api-access-d4rjw\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615347 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4drf\" (UniqueName: \"kubernetes.io/projected/f5c13435-6ec0-4a59-b905-7c4ca0d5978f-kube-api-access-l4drf\") pod \"downloads-7954f5f757-qgmbn\" (UID: \"f5c13435-6ec0-4a59-b905-7c4ca0d5978f\") " pod="openshift-console/downloads-7954f5f757-qgmbn"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615377 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615515 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b84b9f3-fe68-41db-baba-bee06c16d520-audit-dir\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615375 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg4r6\" (UniqueName: \"kubernetes.io/projected/bc208e27-2a4a-49a5-b3b4-1880249b93ed-kube-api-access-hg4r6\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615572 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-service-ca\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615596 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0101e33c-84b1-4777-b381-345e7fa3397b-service-ca-bundle\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615623 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-config\") pod \"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615652 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-oauth-config\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615679 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50afe70f-eb72-473a-8c31-0841be85a3ca-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615707 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60f15cf8-bef6-4663-80f3-882d0cb9c415-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-td5kl\" (UID: \"60f15cf8-bef6-4663-80f3-882d0cb9c415\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615735 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f3bf44f-58af-47b2-b981-8c1806ce06c5-metrics-tls\") pod \"dns-operator-744455d44c-7frg7\" (UID: \"0f3bf44f-58af-47b2-b981-8c1806ce06c5\") " pod="openshift-dns-operator/dns-operator-744455d44c-7frg7"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615757 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr2rw\" (UniqueName: \"kubernetes.io/projected/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-kube-api-access-jr2rw\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615779 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef0b608-0578-4c92-92a2-ab1f6aa787bf-config\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615802 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/144a9368-752c-404b-9881-4a03a930b77a-serving-cert\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615830 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn"
Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615859 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5e5aec0-fe65-4fc3-8254-4f288ba0692b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t49r7\" (UID:
\"e5e5aec0-fe65-4fc3-8254-4f288ba0692b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615883 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7ae4d848-8522-4dbf-bc18-5292c04f6a38-stats-auth\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615913 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615939 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.615997 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92fp7\" (UniqueName: \"kubernetes.io/projected/e3951631-385f-42ea-8f84-0f208fc807b5-kube-api-access-92fp7\") pod \"openshift-config-operator-7777fb866f-8l69c\" (UID: \"e3951631-385f-42ea-8f84-0f208fc807b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616043 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/144a9368-752c-404b-9881-4a03a930b77a-node-pullsecrets\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616067 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqql\" (UniqueName: \"kubernetes.io/projected/144a9368-752c-404b-9881-4a03a930b77a-kube-api-access-hdqql\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616089 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f15cf8-bef6-4663-80f3-882d0cb9c415-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-td5kl\" (UID: \"60f15cf8-bef6-4663-80f3-882d0cb9c415\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616117 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-serving-cert\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616142 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: 
\"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616169 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c75e61af-58ba-4e57-b3af-2c90ad1e2502-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qtc46\" (UID: \"c75e61af-58ba-4e57-b3af-2c90ad1e2502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616197 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqpfs\" (UniqueName: \"kubernetes.io/projected/eef0b608-0578-4c92-92a2-ab1f6aa787bf-kube-api-access-mqpfs\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616225 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b84b9f3-fe68-41db-baba-bee06c16d520-etcd-client\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616252 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616294 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-client-ca\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616318 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616345 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec03f76-9431-4d70-84aa-c1073d8d5e44-serving-cert\") pod \"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616367 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/eef0b608-0578-4c92-92a2-ab1f6aa787bf-etcd-service-ca\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616388 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/144a9368-752c-404b-9881-4a03a930b77a-encryption-config\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc 
kubenswrapper[4787]: I0126 17:46:16.616414 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz22j\" (UniqueName: \"kubernetes.io/projected/0b84b9f3-fe68-41db-baba-bee06c16d520-kube-api-access-jz22j\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616434 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616459 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616482 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/eef0b608-0578-4c92-92a2-ab1f6aa787bf-etcd-ca\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616515 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52wgg\" (UniqueName: \"kubernetes.io/projected/704498ce-b71f-4fcb-b152-bdab533c235f-kube-api-access-52wgg\") pod 
\"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616538 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae4d848-8522-4dbf-bc18-5292c04f6a38-service-ca-bundle\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616562 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gtsm\" (UniqueName: \"kubernetes.io/projected/fc5e8dd5-46e7-4849-b278-d1397195e659-kube-api-access-8gtsm\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616582 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-client-ca\") pod \"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616588 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/704498ce-b71f-4fcb-b152-bdab533c235f-config\") pod \"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616603 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4dad5a6b-7a38-44ac-938b-f0125ca82924-trusted-ca\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616586 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8dae22-5db2-4c03-b8e8-f20c3a911957-config\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616629 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/50afe70f-eb72-473a-8c31-0841be85a3ca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616663 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-audit\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616690 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-config\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: 
I0126 17:46:16.616711 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616732 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c75e61af-58ba-4e57-b3af-2c90ad1e2502-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qtc46\" (UID: \"c75e61af-58ba-4e57-b3af-2c90ad1e2502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616755 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3951631-385f-42ea-8f84-0f208fc807b5-serving-cert\") pod \"openshift-config-operator-7777fb866f-8l69c\" (UID: \"e3951631-385f-42ea-8f84-0f208fc807b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616778 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b84b9f3-fe68-41db-baba-bee06c16d520-serving-cert\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616803 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eef0b608-0578-4c92-92a2-ab1f6aa787bf-etcd-client\") pod \"etcd-operator-b45778765-q2957\" (UID: 
\"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616821 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ae4d848-8522-4dbf-bc18-5292c04f6a38-metrics-certs\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616888 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-images\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616919 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b84b9f3-fe68-41db-baba-bee06c16d520-encryption-config\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616962 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdgd6\" (UniqueName: \"kubernetes.io/projected/d8c4a6bf-9404-4c15-acc5-9563d03b7c47-kube-api-access-wdgd6\") pod \"cluster-samples-operator-665b6dd947-w88rv\" (UID: \"d8c4a6bf-9404-4c15-acc5-9563d03b7c47\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.616984 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdqkn\" 
(UniqueName: \"kubernetes.io/projected/0f3bf44f-58af-47b2-b981-8c1806ce06c5-kube-api-access-tdqkn\") pod \"dns-operator-744455d44c-7frg7\" (UID: \"0f3bf44f-58af-47b2-b981-8c1806ce06c5\") " pod="openshift-dns-operator/dns-operator-744455d44c-7frg7" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617060 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-policies\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617173 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-console-config\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617209 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a662548-4579-4ff8-ae50-97fbc67542a8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8hdx\" (UID: \"1a662548-4579-4ff8-ae50-97fbc67542a8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617230 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/704498ce-b71f-4fcb-b152-bdab533c235f-trusted-ca\") pod \"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617246 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-etcd-serving-ca\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617302 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617342 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0101e33c-84b1-4777-b381-345e7fa3397b-serving-cert\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617366 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b84b9f3-fe68-41db-baba-bee06c16d520-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617478 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-service-ca\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " 
pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617540 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-oauth-serving-cert\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617744 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-trusted-ca-bundle\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617769 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tns2\" (UniqueName: \"kubernetes.io/projected/0101e33c-84b1-4777-b381-345e7fa3397b-kube-api-access-8tns2\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617794 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jkb\" (UniqueName: \"kubernetes.io/projected/3ec03f76-9431-4d70-84aa-c1073d8d5e44-kube-api-access-45jkb\") pod \"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617831 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1a662548-4579-4ff8-ae50-97fbc67542a8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8hdx\" (UID: \"1a662548-4579-4ff8-ae50-97fbc67542a8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617868 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b84b9f3-fe68-41db-baba-bee06c16d520-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617886 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4dad5a6b-7a38-44ac-938b-f0125ca82924-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617907 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg97h\" (UniqueName: \"kubernetes.io/projected/50afe70f-eb72-473a-8c31-0841be85a3ca-kube-api-access-tg97h\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.617959 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d8dae22-5db2-4c03-b8e8-f20c3a911957-machine-approver-tls\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.618029 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.618065 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7chzs\" (UniqueName: \"kubernetes.io/projected/c75e61af-58ba-4e57-b3af-2c90ad1e2502-kube-api-access-7chzs\") pod \"openshift-apiserver-operator-796bbdcf4f-qtc46\" (UID: \"c75e61af-58ba-4e57-b3af-2c90ad1e2502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.618090 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eef0b608-0578-4c92-92a2-ab1f6aa787bf-serving-cert\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.618115 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8c4a6bf-9404-4c15-acc5-9563d03b7c47-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w88rv\" (UID: \"d8c4a6bf-9404-4c15-acc5-9563d03b7c47\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.618137 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.618167 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc208e27-2a4a-49a5-b3b4-1880249b93ed-serving-cert\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.618189 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.618210 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d8dae22-5db2-4c03-b8e8-f20c3a911957-auth-proxy-config\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.620382 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.622566 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4mcn6"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.623140 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0101e33c-84b1-4777-b381-345e7fa3397b-service-ca-bundle\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.623260 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0101e33c-84b1-4777-b381-345e7fa3397b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.623419 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.623752 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-config\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.623801 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.624021 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b84b9f3-fe68-41db-baba-bee06c16d520-audit-policies\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.624484 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0101e33c-84b1-4777-b381-345e7fa3397b-config\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.624617 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d8dae22-5db2-4c03-b8e8-f20c3a911957-auth-proxy-config\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.625006 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/704498ce-b71f-4fcb-b152-bdab533c235f-serving-cert\") pod \"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.625175 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-serving-cert\") pod 
\"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.625382 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.628667 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.629168 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.629426 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b84b9f3-fe68-41db-baba-bee06c16d520-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.630203 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-oauth-config\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 
17:46:16.630423 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-client-ca\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.618438 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c75e61af-58ba-4e57-b3af-2c90ad1e2502-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qtc46\" (UID: \"c75e61af-58ba-4e57-b3af-2c90ad1e2502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.632005 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-trusted-ca-bundle\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.632336 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0101e33c-84b1-4777-b381-345e7fa3397b-serving-cert\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.632621 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.634213 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-console-config\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.635051 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.635062 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.635567 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.635688 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.636212 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.636680 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-policies\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.636675 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.637679 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8c4a6bf-9404-4c15-acc5-9563d03b7c47-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w88rv\" (UID: \"d8c4a6bf-9404-4c15-acc5-9563d03b7c47\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.639414 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc 
kubenswrapper[4787]: I0126 17:46:16.640154 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b84b9f3-fe68-41db-baba-bee06c16d520-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.641121 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.641371 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d8dae22-5db2-4c03-b8e8-f20c3a911957-machine-approver-tls\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.641613 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b84b9f3-fe68-41db-baba-bee06c16d520-encryption-config\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.641800 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.642104 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c75e61af-58ba-4e57-b3af-2c90ad1e2502-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qtc46\" (UID: \"c75e61af-58ba-4e57-b3af-2c90ad1e2502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.643002 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/704498ce-b71f-4fcb-b152-bdab533c235f-trusted-ca\") pod \"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.643063 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ltk52"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.645104 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.648440 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.649674 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.651594 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cq2k4"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.652898 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.654263 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.654718 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b84b9f3-fe68-41db-baba-bee06c16d520-etcd-client\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.655453 4787 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.657295 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc208e27-2a4a-49a5-b3b4-1880249b93ed-serving-cert\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.653890 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b84b9f3-fe68-41db-baba-bee06c16d520-serving-cert\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.661929 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-48jfn"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.662047 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.663272 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.665612 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gndbr"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.665650 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-khb2f"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.667018 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.669114 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xxrkw"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.670085 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.670206 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.672184 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxjzt"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.674761 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-n27mc"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.676929 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.678061 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9fdv8"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.679783 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.681402 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7frg7"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.682841 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.684099 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q2957"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.685753 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.688070 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.688683 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.690423 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b7v9x"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.691631 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.693289 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.694627 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qgmbn"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.696084 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.697165 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nln2k"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.698303 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nln2k" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.698637 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.699825 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.699883 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.701065 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ltk52"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.702418 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.703551 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.704630 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.705961 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8l69c"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.707248 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.708695 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns/dns-default-xxrkw"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.709579 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.710715 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.711836 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fjcmx"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.713026 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cq2k4"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.714568 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.715682 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4mcn6"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.716496 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-phjkm"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.718172 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nmw59"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.718431 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.718806 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5e5aec0-fe65-4fc3-8254-4f288ba0692b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t49r7\" (UID: \"e5e5aec0-fe65-4fc3-8254-4f288ba0692b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.718844 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7ae4d848-8522-4dbf-bc18-5292c04f6a38-stats-auth\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.718878 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92fp7\" (UniqueName: \"kubernetes.io/projected/e3951631-385f-42ea-8f84-0f208fc807b5-kube-api-access-92fp7\") pod \"openshift-config-operator-7777fb866f-8l69c\" (UID: \"e3951631-385f-42ea-8f84-0f208fc807b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.718901 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/144a9368-752c-404b-9881-4a03a930b77a-node-pullsecrets\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.718938 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqql\" (UniqueName: 
\"kubernetes.io/projected/144a9368-752c-404b-9881-4a03a930b77a-kube-api-access-hdqql\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.718975 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f15cf8-bef6-4663-80f3-882d0cb9c415-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-td5kl\" (UID: \"60f15cf8-bef6-4663-80f3-882d0cb9c415\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719003 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqpfs\" (UniqueName: \"kubernetes.io/projected/eef0b608-0578-4c92-92a2-ab1f6aa787bf-kube-api-access-mqpfs\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719044 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec03f76-9431-4d70-84aa-c1073d8d5e44-serving-cert\") pod \"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719069 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/eef0b608-0578-4c92-92a2-ab1f6aa787bf-etcd-service-ca\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 
17:46:16.719093 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/144a9368-752c-404b-9881-4a03a930b77a-encryption-config\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719103 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/144a9368-752c-404b-9881-4a03a930b77a-node-pullsecrets\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719125 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/eef0b608-0578-4c92-92a2-ab1f6aa787bf-etcd-ca\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719175 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae4d848-8522-4dbf-bc18-5292c04f6a38-service-ca-bundle\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719210 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-client-ca\") pod \"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:16 crc 
kubenswrapper[4787]: I0126 17:46:16.719238 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4dad5a6b-7a38-44ac-938b-f0125ca82924-trusted-ca\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719272 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/50afe70f-eb72-473a-8c31-0841be85a3ca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719299 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-audit\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719333 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3951631-385f-42ea-8f84-0f208fc807b5-serving-cert\") pod \"openshift-config-operator-7777fb866f-8l69c\" (UID: \"e3951631-385f-42ea-8f84-0f208fc807b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719363 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eef0b608-0578-4c92-92a2-ab1f6aa787bf-etcd-client\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719388 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ae4d848-8522-4dbf-bc18-5292c04f6a38-metrics-certs\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719412 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-images\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719451 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdqkn\" (UniqueName: \"kubernetes.io/projected/0f3bf44f-58af-47b2-b981-8c1806ce06c5-kube-api-access-tdqkn\") pod \"dns-operator-744455d44c-7frg7\" (UID: \"0f3bf44f-58af-47b2-b981-8c1806ce06c5\") " pod="openshift-dns-operator/dns-operator-744455d44c-7frg7" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719483 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a662548-4579-4ff8-ae50-97fbc67542a8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8hdx\" (UID: \"1a662548-4579-4ff8-ae50-97fbc67542a8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719512 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719550 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jkb\" (UniqueName: \"kubernetes.io/projected/3ec03f76-9431-4d70-84aa-c1073d8d5e44-kube-api-access-45jkb\") pod \"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719577 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a662548-4579-4ff8-ae50-97fbc67542a8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8hdx\" (UID: \"1a662548-4579-4ff8-ae50-97fbc67542a8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719615 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4dad5a6b-7a38-44ac-938b-f0125ca82924-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719643 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg97h\" (UniqueName: \"kubernetes.io/projected/50afe70f-eb72-473a-8c31-0841be85a3ca-kube-api-access-tg97h\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719680 4787 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719692 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eef0b608-0578-4c92-92a2-ab1f6aa787bf-serving-cert\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719716 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719744 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dad5a6b-7a38-44ac-938b-f0125ca82924-metrics-tls\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719771 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khmwx\" (UniqueName: \"kubernetes.io/projected/4dad5a6b-7a38-44ac-938b-f0125ca82924-kube-api-access-khmwx\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719797 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719822 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-config\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719847 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhgtr\" (UniqueName: \"kubernetes.io/projected/60f15cf8-bef6-4663-80f3-882d0cb9c415-kube-api-access-dhgtr\") pod \"openshift-controller-manager-operator-756b6f6bc6-td5kl\" (UID: \"60f15cf8-bef6-4663-80f3-882d0cb9c415\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719872 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50afe70f-eb72-473a-8c31-0841be85a3ca-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719908 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv9z9\" (UniqueName: \"kubernetes.io/projected/7ae4d848-8522-4dbf-bc18-5292c04f6a38-kube-api-access-bv9z9\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc 
kubenswrapper[4787]: I0126 17:46:16.719932 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-config\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.719977 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5e5aec0-fe65-4fc3-8254-4f288ba0692b-config\") pod \"kube-apiserver-operator-766d6c64bb-t49r7\" (UID: \"e5e5aec0-fe65-4fc3-8254-4f288ba0692b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720007 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5e5aec0-fe65-4fc3-8254-4f288ba0692b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t49r7\" (UID: \"e5e5aec0-fe65-4fc3-8254-4f288ba0692b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720034 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-image-import-ca\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720068 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e3951631-385f-42ea-8f84-0f208fc807b5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8l69c\" (UID: 
\"e3951631-385f-42ea-8f84-0f208fc807b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720091 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/144a9368-752c-404b-9881-4a03a930b77a-audit-dir\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720121 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7ae4d848-8522-4dbf-bc18-5292c04f6a38-default-certificate\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720151 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/144a9368-752c-404b-9881-4a03a930b77a-etcd-client\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720178 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a662548-4579-4ff8-ae50-97fbc67542a8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8hdx\" (UID: \"1a662548-4579-4ff8-ae50-97fbc67542a8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720215 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4drf\" (UniqueName: 
\"kubernetes.io/projected/f5c13435-6ec0-4a59-b905-7c4ca0d5978f-kube-api-access-l4drf\") pod \"downloads-7954f5f757-qgmbn\" (UID: \"f5c13435-6ec0-4a59-b905-7c4ca0d5978f\") " pod="openshift-console/downloads-7954f5f757-qgmbn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720252 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-config\") pod \"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720280 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50afe70f-eb72-473a-8c31-0841be85a3ca-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720309 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60f15cf8-bef6-4663-80f3-882d0cb9c415-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-td5kl\" (UID: \"60f15cf8-bef6-4663-80f3-882d0cb9c415\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720332 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f3bf44f-58af-47b2-b981-8c1806ce06c5-metrics-tls\") pod \"dns-operator-744455d44c-7frg7\" (UID: \"0f3bf44f-58af-47b2-b981-8c1806ce06c5\") " pod="openshift-dns-operator/dns-operator-744455d44c-7frg7" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 
17:46:16.720355 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr2rw\" (UniqueName: \"kubernetes.io/projected/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-kube-api-access-jr2rw\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720381 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef0b608-0578-4c92-92a2-ab1f6aa787bf-config\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720403 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/144a9368-752c-404b-9881-4a03a930b77a-serving-cert\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.720801 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-client-ca\") pod \"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.721352 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-audit\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: 
I0126 17:46:16.721840 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5e5aec0-fe65-4fc3-8254-4f288ba0692b-config\") pod \"kube-apiserver-operator-766d6c64bb-t49r7\" (UID: \"e5e5aec0-fe65-4fc3-8254-4f288ba0692b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.721965 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-config\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.722276 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-etcd-serving-ca\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.722731 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/144a9368-752c-404b-9881-4a03a930b77a-encryption-config\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.723018 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-image-import-ca\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.723188 
4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/144a9368-752c-404b-9881-4a03a930b77a-audit-dir\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.723377 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec03f76-9431-4d70-84aa-c1073d8d5e44-serving-cert\") pod \"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.723691 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5e5aec0-fe65-4fc3-8254-4f288ba0692b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t49r7\" (UID: \"e5e5aec0-fe65-4fc3-8254-4f288ba0692b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.723834 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e3951631-385f-42ea-8f84-0f208fc807b5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8l69c\" (UID: \"e3951631-385f-42ea-8f84-0f208fc807b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.724134 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-config\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc 
kubenswrapper[4787]: I0126 17:46:16.724169 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/144a9368-752c-404b-9881-4a03a930b77a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.724790 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-phjkm"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.724922 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nmw59" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.725099 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3951631-385f-42ea-8f84-0f208fc807b5-serving-cert\") pod \"openshift-config-operator-7777fb866f-8l69c\" (UID: \"e3951631-385f-42ea-8f84-0f208fc807b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.725370 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nmw59"] Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.725527 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a662548-4579-4ff8-ae50-97fbc67542a8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8hdx\" (UID: \"1a662548-4579-4ff8-ae50-97fbc67542a8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.725589 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-config\") pod 
\"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.725692 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-images\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.726508 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50afe70f-eb72-473a-8c31-0841be85a3ca-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.727073 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.727330 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a662548-4579-4ff8-ae50-97fbc67542a8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8hdx\" (UID: \"1a662548-4579-4ff8-ae50-97fbc67542a8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.728671 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/144a9368-752c-404b-9881-4a03a930b77a-etcd-client\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.732275 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f3bf44f-58af-47b2-b981-8c1806ce06c5-metrics-tls\") pod \"dns-operator-744455d44c-7frg7\" (UID: \"0f3bf44f-58af-47b2-b981-8c1806ce06c5\") " pod="openshift-dns-operator/dns-operator-744455d44c-7frg7" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.733013 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/144a9368-752c-404b-9881-4a03a930b77a-serving-cert\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.740322 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.760212 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.780742 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.800338 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.807790 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.808014 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.809044 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7ae4d848-8522-4dbf-bc18-5292c04f6a38-default-certificate\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.822937 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.833173 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7ae4d848-8522-4dbf-bc18-5292c04f6a38-stats-auth\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.840556 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.845794 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ae4d848-8522-4dbf-bc18-5292c04f6a38-metrics-certs\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " 
pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.861318 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.880582 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.890937 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae4d848-8522-4dbf-bc18-5292c04f6a38-service-ca-bundle\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.900131 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.920326 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.927415 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eef0b608-0578-4c92-92a2-ab1f6aa787bf-serving-cert\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.940550 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.943926 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eef0b608-0578-4c92-92a2-ab1f6aa787bf-etcd-client\") pod 
\"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.960696 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.981199 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 17:46:16 crc kubenswrapper[4787]: I0126 17:46:16.987114 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef0b608-0578-4c92-92a2-ab1f6aa787bf-config\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.000508 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.010433 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/eef0b608-0578-4c92-92a2-ab1f6aa787bf-etcd-ca\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.019367 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.020257 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/eef0b608-0578-4c92-92a2-ab1f6aa787bf-etcd-service-ca\") pod \"etcd-operator-b45778765-q2957\" (UID: \"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.039539 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.060122 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.077512 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/50afe70f-eb72-473a-8c31-0841be85a3ca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.081321 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.101303 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.122994 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.140060 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.160784 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.180704 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.188478 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dad5a6b-7a38-44ac-938b-f0125ca82924-metrics-tls\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.213046 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.222729 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4dad5a6b-7a38-44ac-938b-f0125ca82924-trusted-ca\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.240931 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.261483 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.280748 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.288430 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60f15cf8-bef6-4663-80f3-882d0cb9c415-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-td5kl\" (UID: \"60f15cf8-bef6-4663-80f3-882d0cb9c415\") 
" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.300871 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.320710 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.340697 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.351134 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f15cf8-bef6-4663-80f3-882d0cb9c415-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-td5kl\" (UID: \"60f15cf8-bef6-4663-80f3-882d0cb9c415\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.360880 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.381415 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.400786 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.420202 4787 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.441049 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.460551 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.480871 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.501173 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.520548 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.541539 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.560155 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.580805 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.600515 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.618500 4787 request.go:700] Waited for 1.006108932s due to client-side throttling, not 
priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0 Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.625381 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.639297 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.659365 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.681082 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.700395 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.720748 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.742583 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.760461 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.781009 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.801418 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.821303 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.859497 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt44s\" (UniqueName: \"kubernetes.io/projected/11106d86-1e86-47cf-907d-9fb690a4f56e-kube-api-access-qt44s\") pod \"console-f9d7485db-gndbr\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.877606 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg4r6\" (UniqueName: \"kubernetes.io/projected/bc208e27-2a4a-49a5-b3b4-1880249b93ed-kube-api-access-hg4r6\") pod \"controller-manager-879f6c89f-7fgjp\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.883302 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.914593 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4rjw\" (UniqueName: \"kubernetes.io/projected/9d8dae22-5db2-4c03-b8e8-f20c3a911957-kube-api-access-d4rjw\") pod \"machine-approver-56656f9798-shkrx\" (UID: \"9d8dae22-5db2-4c03-b8e8-f20c3a911957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.919986 4787 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.940143 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.941616 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.961146 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.978723 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" Jan 26 17:46:17 crc kubenswrapper[4787]: I0126 17:46:17.980128 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 17:46:17 crc kubenswrapper[4787]: W0126 17:46:17.995667 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d8dae22_5db2_4c03_b8e8_f20c3a911957.slice/crio-60b56e9cdc8085e9c2e61654e8673374be62d8c63722262784843010b10420e7 WatchSource:0}: Error finding container 60b56e9cdc8085e9c2e61654e8673374be62d8c63722262784843010b10420e7: Status 404 returned error can't find the container with id 60b56e9cdc8085e9c2e61654e8673374be62d8c63722262784843010b10420e7 Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.005597 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.020278 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 
17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.058695 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.062840 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52wgg\" (UniqueName: \"kubernetes.io/projected/704498ce-b71f-4fcb-b152-bdab533c235f-kube-api-access-52wgg\") pod \"console-operator-58897d9998-khb2f\" (UID: \"704498ce-b71f-4fcb-b152-bdab533c235f\") " pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.082104 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz22j\" (UniqueName: \"kubernetes.io/projected/0b84b9f3-fe68-41db-baba-bee06c16d520-kube-api-access-jz22j\") pod \"apiserver-7bbb656c7d-7djvj\" (UID: \"0b84b9f3-fe68-41db-baba-bee06c16d520\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.087865 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.096623 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7chzs\" (UniqueName: \"kubernetes.io/projected/c75e61af-58ba-4e57-b3af-2c90ad1e2502-kube-api-access-7chzs\") pod \"openshift-apiserver-operator-796bbdcf4f-qtc46\" (UID: \"c75e61af-58ba-4e57-b3af-2c90ad1e2502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.100836 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.111180 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.139091 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdgd6\" (UniqueName: \"kubernetes.io/projected/d8c4a6bf-9404-4c15-acc5-9563d03b7c47-kube-api-access-wdgd6\") pod \"cluster-samples-operator-665b6dd947-w88rv\" (UID: \"d8c4a6bf-9404-4c15-acc5-9563d03b7c47\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.158719 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tns2\" (UniqueName: \"kubernetes.io/projected/0101e33c-84b1-4777-b381-345e7fa3397b-kube-api-access-8tns2\") pod \"authentication-operator-69f744f599-n27mc\" (UID: \"0101e33c-84b1-4777-b381-345e7fa3397b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.172418 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7fgjp"] Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.178543 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gtsm\" (UniqueName: \"kubernetes.io/projected/fc5e8dd5-46e7-4849-b278-d1397195e659-kube-api-access-8gtsm\") pod \"oauth-openshift-558db77b4-48jfn\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.184173 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 17:46:18 crc kubenswrapper[4787]: W0126 17:46:18.197074 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc208e27_2a4a_49a5_b3b4_1880249b93ed.slice/crio-bf5336230ee124a20173854b2b3922fe2bc6de9531f69455b3e57b246100b79f WatchSource:0}: Error finding container bf5336230ee124a20173854b2b3922fe2bc6de9531f69455b3e57b246100b79f: Status 404 returned error can't find the container with id bf5336230ee124a20173854b2b3922fe2bc6de9531f69455b3e57b246100b79f Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.201940 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.221558 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.241572 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.249902 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gndbr"] Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.253519 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.261429 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.280455 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.291905 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj"] Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.301010 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 17:46:18 crc kubenswrapper[4787]: W0126 17:46:18.308061 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b84b9f3_fe68_41db_baba_bee06c16d520.slice/crio-40c105bec3ce75812b928a8cefd080c256422286c93e533f1bc0ec3101eb7c29 WatchSource:0}: Error finding container 40c105bec3ce75812b928a8cefd080c256422286c93e533f1bc0ec3101eb7c29: Status 404 returned error can't find the container with id 40c105bec3ce75812b928a8cefd080c256422286c93e533f1bc0ec3101eb7c29 Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.311159 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.320318 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.323829 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-khb2f"] Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.325629 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" event={"ID":"0b84b9f3-fe68-41db-baba-bee06c16d520","Type":"ContainerStarted","Data":"40c105bec3ce75812b928a8cefd080c256422286c93e533f1bc0ec3101eb7c29"} Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.328898 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gndbr" event={"ID":"11106d86-1e86-47cf-907d-9fb690a4f56e","Type":"ContainerStarted","Data":"dacab8e31cfe41128b6e4565c264fda79a25f1522efdfec0b619314c34ba229c"} Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.340168 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.343568 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" event={"ID":"9d8dae22-5db2-4c03-b8e8-f20c3a911957","Type":"ContainerStarted","Data":"d9a541a05ae81a49c8d35c777ca160169a33c8f2589074f3eb78a8ef4a34b3cc"} Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.343630 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" event={"ID":"9d8dae22-5db2-4c03-b8e8-f20c3a911957","Type":"ContainerStarted","Data":"60b56e9cdc8085e9c2e61654e8673374be62d8c63722262784843010b10420e7"} Jan 26 17:46:18 crc 
kubenswrapper[4787]: I0126 17:46:18.345285 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" event={"ID":"bc208e27-2a4a-49a5-b3b4-1880249b93ed","Type":"ContainerStarted","Data":"1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21"} Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.345303 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" event={"ID":"bc208e27-2a4a-49a5-b3b4-1880249b93ed","Type":"ContainerStarted","Data":"bf5336230ee124a20173854b2b3922fe2bc6de9531f69455b3e57b246100b79f"} Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.345597 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.354906 4787 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7fgjp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.355024 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" podUID="bc208e27-2a4a-49a5-b3b4-1880249b93ed" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.360496 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.380602 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" 
Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.399257 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.400586 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.420411 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.434241 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.440474 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.444636 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:18 crc kubenswrapper[4787]: E0126 17:46:18.445866 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:48:20.445833657 +0000 UTC m=+269.152969790 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.446193 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.453894 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.458846 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-48jfn"] Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.462846 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 17:46:18 crc kubenswrapper[4787]: W0126 17:46:18.490644 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc5e8dd5_46e7_4849_b278_d1397195e659.slice/crio-038c9aefcda7e9363b3e98b55532f0b55416591500402b8bf0a2eb76db8bb055 WatchSource:0}: Error finding container 038c9aefcda7e9363b3e98b55532f0b55416591500402b8bf0a2eb76db8bb055: Status 404 returned error can't find the container with id 038c9aefcda7e9363b3e98b55532f0b55416591500402b8bf0a2eb76db8bb055 Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.501060 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.519844 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46"] Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.522436 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.539730 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.547025 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.547103 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.547149 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.548078 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.551989 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.552033 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.561508 4787 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 17:46:18 crc kubenswrapper[4787]: 
I0126 17:46:18.580580 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.616503 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.643073 4787 request.go:700] Waited for 1.923937439s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.654895 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92fp7\" (UniqueName: \"kubernetes.io/projected/e3951631-385f-42ea-8f84-0f208fc807b5-kube-api-access-92fp7\") pod \"openshift-config-operator-7777fb866f-8l69c\" (UID: \"e3951631-385f-42ea-8f84-0f208fc807b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.660624 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqql\" (UniqueName: \"kubernetes.io/projected/144a9368-752c-404b-9881-4a03a930b77a-kube-api-access-hdqql\") pod \"apiserver-76f77b778f-9fdv8\" (UID: \"144a9368-752c-404b-9881-4a03a930b77a\") " pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.669352 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-n27mc"] Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.680082 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqpfs\" (UniqueName: \"kubernetes.io/projected/eef0b608-0578-4c92-92a2-ab1f6aa787bf-kube-api-access-mqpfs\") pod \"etcd-operator-b45778765-q2957\" (UID: 
\"eef0b608-0578-4c92-92a2-ab1f6aa787bf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:18 crc kubenswrapper[4787]: W0126 17:46:18.685862 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0101e33c_84b1_4777_b381_345e7fa3397b.slice/crio-3068ac17401e7f4fd660249210213d67432816ede8e98fd3ef0d4cde8a366a8a WatchSource:0}: Error finding container 3068ac17401e7f4fd660249210213d67432816ede8e98fd3ef0d4cde8a366a8a: Status 404 returned error can't find the container with id 3068ac17401e7f4fd660249210213d67432816ede8e98fd3ef0d4cde8a366a8a Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.697418 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv9z9\" (UniqueName: \"kubernetes.io/projected/7ae4d848-8522-4dbf-bc18-5292c04f6a38-kube-api-access-bv9z9\") pod \"router-default-5444994796-gnbmj\" (UID: \"7ae4d848-8522-4dbf-bc18-5292c04f6a38\") " pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.715702 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdqkn\" (UniqueName: \"kubernetes.io/projected/0f3bf44f-58af-47b2-b981-8c1806ce06c5-kube-api-access-tdqkn\") pod \"dns-operator-744455d44c-7frg7\" (UID: \"0f3bf44f-58af-47b2-b981-8c1806ce06c5\") " pod="openshift-dns-operator/dns-operator-744455d44c-7frg7" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.722306 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv"] Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.744457 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5e5aec0-fe65-4fc3-8254-4f288ba0692b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t49r7\" (UID: 
\"e5e5aec0-fe65-4fc3-8254-4f288ba0692b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.755732 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4drf\" (UniqueName: \"kubernetes.io/projected/f5c13435-6ec0-4a59-b905-7c4ca0d5978f-kube-api-access-l4drf\") pod \"downloads-7954f5f757-qgmbn\" (UID: \"f5c13435-6ec0-4a59-b905-7c4ca0d5978f\") " pod="openshift-console/downloads-7954f5f757-qgmbn" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.777509 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50afe70f-eb72-473a-8c31-0841be85a3ca-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.780160 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qgmbn" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.790368 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.797626 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.800117 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jkb\" (UniqueName: \"kubernetes.io/projected/3ec03f76-9431-4d70-84aa-c1073d8d5e44-kube-api-access-45jkb\") pod \"route-controller-manager-6576b87f9c-kblzn\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.804581 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.809112 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.818618 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.818758 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a662548-4579-4ff8-ae50-97fbc67542a8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8hdx\" (UID: \"1a662548-4579-4ff8-ae50-97fbc67542a8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.822415 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.828547 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7frg7" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.837285 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.843174 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.845361 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khmwx\" (UniqueName: \"kubernetes.io/projected/4dad5a6b-7a38-44ac-938b-f0125ca82924-kube-api-access-khmwx\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.850830 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.858161 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.873376 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4dad5a6b-7a38-44ac-938b-f0125ca82924-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vsn6b\" (UID: \"4dad5a6b-7a38-44ac-938b-f0125ca82924\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.876710 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr2rw\" (UniqueName: \"kubernetes.io/projected/1b873062-3dbd-40cb-92f9-cc0fbfd98f2b-kube-api-access-jr2rw\") pod \"machine-api-operator-5694c8668f-hxjzt\" (UID: \"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.878209 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.906569 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhgtr\" (UniqueName: \"kubernetes.io/projected/60f15cf8-bef6-4663-80f3-882d0cb9c415-kube-api-access-dhgtr\") pod \"openshift-controller-manager-operator-756b6f6bc6-td5kl\" (UID: \"60f15cf8-bef6-4663-80f3-882d0cb9c415\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.928933 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.936592 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg97h\" (UniqueName: \"kubernetes.io/projected/50afe70f-eb72-473a-8c31-0841be85a3ca-kube-api-access-tg97h\") pod \"cluster-image-registry-operator-dc59b4c8b-c6cql\" (UID: \"50afe70f-eb72-473a-8c31-0841be85a3ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.940024 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.960609 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 17:46:18 crc kubenswrapper[4787]: I0126 17:46:18.980546 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.060555 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c488c5cb-fbf3-4ca4-9ef7-3e171e36b302-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5bm7j\" (UID: \"c488c5cb-fbf3-4ca4-9ef7-3e171e36b302\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.060606 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.060658 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx76q\" (UniqueName: \"kubernetes.io/projected/c488c5cb-fbf3-4ca4-9ef7-3e171e36b302-kube-api-access-dx76q\") pod \"control-plane-machine-set-operator-78cbb6b69f-5bm7j\" (UID: \"c488c5cb-fbf3-4ca4-9ef7-3e171e36b302\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.060689 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2dvz\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-kube-api-access-b2dvz\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.060706 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: 
\"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.060728 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.060746 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-certificates\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.060790 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-bound-sa-token\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.060809 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-tls\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.061679 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-trusted-ca\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: E0126 17:46:19.062757 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:19.562741392 +0000 UTC m=+148.269877515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.138088 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.162535 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.162918 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6334d21e-4274-4a0f-b03f-a58771653391-cert\") pod \"ingress-canary-nmw59\" (UID: \"6334d21e-4274-4a0f-b03f-a58771653391\") " pod="openshift-ingress-canary/ingress-canary-nmw59" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.162971 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c53622-6338-4176-b447-716024375a8a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fnrvk\" (UID: \"c6c53622-6338-4176-b447-716024375a8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163000 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ca65646b-44bb-497c-a8b7-140264327c49-srv-cert\") pod \"catalog-operator-68c6474976-rwbgg\" (UID: \"ca65646b-44bb-497c-a8b7-140264327c49\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163021 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2e28a309-ee20-408a-b0a1-d1c457139803-config-volume\") pod \"collect-profiles-29490825-pdm74\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163089 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36e5052f-785d-4ca1-897c-f317e7a728e0-config-volume\") pod \"dns-default-xxrkw\" (UID: \"36e5052f-785d-4ca1-897c-f317e7a728e0\") " pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163148 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163172 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-765bs\" (UniqueName: \"kubernetes.io/projected/2e28a309-ee20-408a-b0a1-d1c457139803-kube-api-access-765bs\") pod \"collect-profiles-29490825-pdm74\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163238 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jj65\" (UniqueName: \"kubernetes.io/projected/02732137-9e5f-4a66-bde9-c3e6a299412c-kube-api-access-5jj65\") pod \"multus-admission-controller-857f4d67dd-fjcmx\" (UID: \"02732137-9e5f-4a66-bde9-c3e6a299412c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx" Jan 26 17:46:19 crc 
kubenswrapper[4787]: I0126 17:46:19.163289 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6ac8b3d3-8474-4e6b-b297-c18a5addf978-signing-key\") pod \"service-ca-9c57cc56f-cq2k4\" (UID: \"6ac8b3d3-8474-4e6b-b297-c18a5addf978\") " pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163316 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx76q\" (UniqueName: \"kubernetes.io/projected/c488c5cb-fbf3-4ca4-9ef7-3e171e36b302-kube-api-access-dx76q\") pod \"control-plane-machine-set-operator-78cbb6b69f-5bm7j\" (UID: \"c488c5cb-fbf3-4ca4-9ef7-3e171e36b302\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163363 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2e4e7ed-3b43-4a71-897a-b9a2187e6894-proxy-tls\") pod \"machine-config-controller-84d6567774-pmhxg\" (UID: \"e2e4e7ed-3b43-4a71-897a-b9a2187e6894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163391 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcc8g\" (UniqueName: \"kubernetes.io/projected/e2e4e7ed-3b43-4a71-897a-b9a2187e6894-kube-api-access-mcc8g\") pod \"machine-config-controller-84d6567774-pmhxg\" (UID: \"e2e4e7ed-3b43-4a71-897a-b9a2187e6894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163416 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-plugins-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163498 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163540 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c53622-6338-4176-b447-716024375a8a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fnrvk\" (UID: \"c6c53622-6338-4176-b447-716024375a8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163613 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vvkp\" (UniqueName: \"kubernetes.io/projected/ca65646b-44bb-497c-a8b7-140264327c49-kube-api-access-8vvkp\") pod \"catalog-operator-68c6474976-rwbgg\" (UID: \"ca65646b-44bb-497c-a8b7-140264327c49\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163689 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6394eb5f-f1da-486a-93ec-845b88c8b85b-tmpfs\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163776 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72fdf9c5-ba56-4f29-9e0d-be1b6967498a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ddplg\" (UID: \"72fdf9c5-ba56-4f29-9e0d-be1b6967498a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163804 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bdd2a89-6484-4f2f-8c16-42deccae45da-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163848 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6394eb5f-f1da-486a-93ec-845b88c8b85b-webhook-cert\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163874 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-mountpoint-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.163896 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6394eb5f-f1da-486a-93ec-845b88c8b85b-apiservice-cert\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.164027 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8bdd2a89-6484-4f2f-8c16-42deccae45da-images\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.164054 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36e5052f-785d-4ca1-897c-f317e7a728e0-metrics-tls\") pod \"dns-default-xxrkw\" (UID: \"36e5052f-785d-4ca1-897c-f317e7a728e0\") " pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.164098 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4mcn6\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.164140 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-tls\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.164186 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aac1329d-865c-4d14-9d66-6fbb4542afba-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w2kj2\" (UID: \"aac1329d-865c-4d14-9d66-6fbb4542afba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.164216 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4mcn6\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.170548 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" Jan 26 17:46:19 crc kubenswrapper[4787]: E0126 17:46:19.171005 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:19.670978664 +0000 UTC m=+148.378114807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.171281 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.176294 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183057 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e28a309-ee20-408a-b0a1-d1c457139803-secret-volume\") pod \"collect-profiles-29490825-pdm74\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183112 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/ca65646b-44bb-497c-a8b7-140264327c49-profile-collector-cert\") pod \"catalog-operator-68c6474976-rwbgg\" (UID: \"ca65646b-44bb-497c-a8b7-140264327c49\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183178 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-csi-data-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183216 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fjkc\" (UniqueName: \"kubernetes.io/projected/18851ff6-442b-4a22-97c1-ceeef57c6c20-kube-api-access-6fjkc\") pod \"migrator-59844c95c7-qhj6c\" (UID: \"18851ff6-442b-4a22-97c1-ceeef57c6c20\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183252 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ttnk\" (UniqueName: \"kubernetes.io/projected/cc279a24-e1ee-4880-8153-c0695b1762df-kube-api-access-9ttnk\") pod \"machine-config-server-nln2k\" (UID: \"cc279a24-e1ee-4880-8153-c0695b1762df\") " pod="openshift-machine-config-operator/machine-config-server-nln2k" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183302 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2lp2\" (UniqueName: \"kubernetes.io/projected/6334d21e-4274-4a0f-b03f-a58771653391-kube-api-access-f2lp2\") pod \"ingress-canary-nmw59\" (UID: \"6334d21e-4274-4a0f-b03f-a58771653391\") " pod="openshift-ingress-canary/ingress-canary-nmw59" Jan 26 17:46:19 crc 
kubenswrapper[4787]: I0126 17:46:19.183351 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdwz6\" (UniqueName: \"kubernetes.io/projected/6ac8b3d3-8474-4e6b-b297-c18a5addf978-kube-api-access-gdwz6\") pod \"service-ca-9c57cc56f-cq2k4\" (UID: \"6ac8b3d3-8474-4e6b-b297-c18a5addf978\") " pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183416 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzlnm\" (UniqueName: \"kubernetes.io/projected/6394eb5f-f1da-486a-93ec-845b88c8b85b-kube-api-access-wzlnm\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183461 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spqgs\" (UniqueName: \"kubernetes.io/projected/00be38bb-add0-4e45-9412-e9169ee8c3dc-kube-api-access-spqgs\") pod \"marketplace-operator-79b997595-4mcn6\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183530 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2e4e7ed-3b43-4a71-897a-b9a2187e6894-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pmhxg\" (UID: \"e2e4e7ed-3b43-4a71-897a-b9a2187e6894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183546 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2a9d439b-de27-49e2-b4ea-fe5dfd3b1925-serving-cert\") pod \"service-ca-operator-777779d784-ltk52\" (UID: \"2a9d439b-de27-49e2-b4ea-fe5dfd3b1925\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183562 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/02732137-9e5f-4a66-bde9-c3e6a299412c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fjcmx\" (UID: \"02732137-9e5f-4a66-bde9-c3e6a299412c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183595 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6ac8b3d3-8474-4e6b-b297-c18a5addf978-signing-cabundle\") pod \"service-ca-9c57cc56f-cq2k4\" (UID: \"6ac8b3d3-8474-4e6b-b297-c18a5addf978\") " pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183615 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvqmx\" (UniqueName: \"kubernetes.io/projected/62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89-kube-api-access-xvqmx\") pod \"olm-operator-6b444d44fb-p69cl\" (UID: \"62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183670 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-trusted-ca\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: 
I0126 17:46:19.183700 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c488c5cb-fbf3-4ca4-9ef7-3e171e36b302-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5bm7j\" (UID: \"c488c5cb-fbf3-4ca4-9ef7-3e171e36b302\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183768 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-registration-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183795 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bdd2a89-6484-4f2f-8c16-42deccae45da-proxy-tls\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183845 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs5f8\" (UniqueName: \"kubernetes.io/projected/36e5052f-785d-4ca1-897c-f317e7a728e0-kube-api-access-zs5f8\") pod \"dns-default-xxrkw\" (UID: \"36e5052f-785d-4ca1-897c-f317e7a728e0\") " pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183885 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2dvz\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-kube-api-access-b2dvz\") pod 
\"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.183905 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rt27\" (UniqueName: \"kubernetes.io/projected/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-kube-api-access-8rt27\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184002 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-certificates\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184026 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72fdf9c5-ba56-4f29-9e0d-be1b6967498a-config\") pod \"kube-controller-manager-operator-78b949d7b-ddplg\" (UID: \"72fdf9c5-ba56-4f29-9e0d-be1b6967498a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184138 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-bound-sa-token\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184155 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pcws\" (UniqueName: \"kubernetes.io/projected/8bdd2a89-6484-4f2f-8c16-42deccae45da-kube-api-access-8pcws\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184220 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwn4n\" (UniqueName: \"kubernetes.io/projected/2a9d439b-de27-49e2-b4ea-fe5dfd3b1925-kube-api-access-cwn4n\") pod \"service-ca-operator-777779d784-ltk52\" (UID: \"2a9d439b-de27-49e2-b4ea-fe5dfd3b1925\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184237 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cc279a24-e1ee-4880-8153-c0695b1762df-node-bootstrap-token\") pod \"machine-config-server-nln2k\" (UID: \"cc279a24-e1ee-4880-8153-c0695b1762df\") " pod="openshift-machine-config-operator/machine-config-server-nln2k" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184267 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7x99\" (UniqueName: \"kubernetes.io/projected/c6c53622-6338-4176-b447-716024375a8a-kube-api-access-w7x99\") pod \"kube-storage-version-migrator-operator-b67b599dd-fnrvk\" (UID: \"c6c53622-6338-4176-b447-716024375a8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184297 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-socket-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184370 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a9d439b-de27-49e2-b4ea-fe5dfd3b1925-config\") pod \"service-ca-operator-777779d784-ltk52\" (UID: \"2a9d439b-de27-49e2-b4ea-fe5dfd3b1925\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184392 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cc279a24-e1ee-4880-8153-c0695b1762df-certs\") pod \"machine-config-server-nln2k\" (UID: \"cc279a24-e1ee-4880-8153-c0695b1762df\") " pod="openshift-machine-config-operator/machine-config-server-nln2k" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184415 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtwxj\" (UniqueName: \"kubernetes.io/projected/aac1329d-865c-4d14-9d66-6fbb4542afba-kube-api-access-mtwxj\") pod \"package-server-manager-789f6589d5-w2kj2\" (UID: \"aac1329d-865c-4d14-9d66-6fbb4542afba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184469 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72fdf9c5-ba56-4f29-9e0d-be1b6967498a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ddplg\" (UID: \"72fdf9c5-ba56-4f29-9e0d-be1b6967498a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" Jan 
26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184504 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p69cl\" (UID: \"62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.184537 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89-srv-cert\") pod \"olm-operator-6b444d44fb-p69cl\" (UID: \"62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.198548 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-tls\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.203920 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-trusted-ca\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.206760 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c488c5cb-fbf3-4ca4-9ef7-3e171e36b302-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-5bm7j\" (UID: \"c488c5cb-fbf3-4ca4-9ef7-3e171e36b302\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.208417 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-certificates\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.211879 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.248212 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx76q\" (UniqueName: \"kubernetes.io/projected/c488c5cb-fbf3-4ca4-9ef7-3e171e36b302-kube-api-access-dx76q\") pod \"control-plane-machine-set-operator-78cbb6b69f-5bm7j\" (UID: \"c488c5cb-fbf3-4ca4-9ef7-3e171e36b302\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.260096 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qgmbn"] Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.283481 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2dvz\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-kube-api-access-b2dvz\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285612 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xvqmx\" (UniqueName: \"kubernetes.io/projected/62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89-kube-api-access-xvqmx\") pod \"olm-operator-6b444d44fb-p69cl\" (UID: \"62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285655 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-registration-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285677 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bdd2a89-6484-4f2f-8c16-42deccae45da-proxy-tls\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285705 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs5f8\" (UniqueName: \"kubernetes.io/projected/36e5052f-785d-4ca1-897c-f317e7a728e0-kube-api-access-zs5f8\") pod \"dns-default-xxrkw\" (UID: \"36e5052f-785d-4ca1-897c-f317e7a728e0\") " pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285735 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rt27\" (UniqueName: \"kubernetes.io/projected/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-kube-api-access-8rt27\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 
17:46:19.285763 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72fdf9c5-ba56-4f29-9e0d-be1b6967498a-config\") pod \"kube-controller-manager-operator-78b949d7b-ddplg\" (UID: \"72fdf9c5-ba56-4f29-9e0d-be1b6967498a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285797 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pcws\" (UniqueName: \"kubernetes.io/projected/8bdd2a89-6484-4f2f-8c16-42deccae45da-kube-api-access-8pcws\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285821 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwn4n\" (UniqueName: \"kubernetes.io/projected/2a9d439b-de27-49e2-b4ea-fe5dfd3b1925-kube-api-access-cwn4n\") pod \"service-ca-operator-777779d784-ltk52\" (UID: \"2a9d439b-de27-49e2-b4ea-fe5dfd3b1925\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285842 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cc279a24-e1ee-4880-8153-c0695b1762df-node-bootstrap-token\") pod \"machine-config-server-nln2k\" (UID: \"cc279a24-e1ee-4880-8153-c0695b1762df\") " pod="openshift-machine-config-operator/machine-config-server-nln2k" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285865 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7x99\" (UniqueName: \"kubernetes.io/projected/c6c53622-6338-4176-b447-716024375a8a-kube-api-access-w7x99\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-fnrvk\" (UID: \"c6c53622-6338-4176-b447-716024375a8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285888 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-socket-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285909 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a9d439b-de27-49e2-b4ea-fe5dfd3b1925-config\") pod \"service-ca-operator-777779d784-ltk52\" (UID: \"2a9d439b-de27-49e2-b4ea-fe5dfd3b1925\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285930 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cc279a24-e1ee-4880-8153-c0695b1762df-certs\") pod \"machine-config-server-nln2k\" (UID: \"cc279a24-e1ee-4880-8153-c0695b1762df\") " pod="openshift-machine-config-operator/machine-config-server-nln2k" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285973 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtwxj\" (UniqueName: \"kubernetes.io/projected/aac1329d-865c-4d14-9d66-6fbb4542afba-kube-api-access-mtwxj\") pod \"package-server-manager-789f6589d5-w2kj2\" (UID: \"aac1329d-865c-4d14-9d66-6fbb4542afba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.285992 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-registration-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286000 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72fdf9c5-ba56-4f29-9e0d-be1b6967498a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ddplg\" (UID: \"72fdf9c5-ba56-4f29-9e0d-be1b6967498a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286059 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89-srv-cert\") pod \"olm-operator-6b444d44fb-p69cl\" (UID: \"62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286079 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p69cl\" (UID: \"62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286101 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6334d21e-4274-4a0f-b03f-a58771653391-cert\") pod \"ingress-canary-nmw59\" (UID: \"6334d21e-4274-4a0f-b03f-a58771653391\") " pod="openshift-ingress-canary/ingress-canary-nmw59" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286118 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c53622-6338-4176-b447-716024375a8a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fnrvk\" (UID: \"c6c53622-6338-4176-b447-716024375a8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286133 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ca65646b-44bb-497c-a8b7-140264327c49-srv-cert\") pod \"catalog-operator-68c6474976-rwbgg\" (UID: \"ca65646b-44bb-497c-a8b7-140264327c49\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286154 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e28a309-ee20-408a-b0a1-d1c457139803-config-volume\") pod \"collect-profiles-29490825-pdm74\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286173 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36e5052f-785d-4ca1-897c-f317e7a728e0-config-volume\") pod \"dns-default-xxrkw\" (UID: \"36e5052f-785d-4ca1-897c-f317e7a728e0\") " pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286194 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-765bs\" (UniqueName: \"kubernetes.io/projected/2e28a309-ee20-408a-b0a1-d1c457139803-kube-api-access-765bs\") pod \"collect-profiles-29490825-pdm74\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286217 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jj65\" (UniqueName: \"kubernetes.io/projected/02732137-9e5f-4a66-bde9-c3e6a299412c-kube-api-access-5jj65\") pod \"multus-admission-controller-857f4d67dd-fjcmx\" (UID: \"02732137-9e5f-4a66-bde9-c3e6a299412c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286236 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6ac8b3d3-8474-4e6b-b297-c18a5addf978-signing-key\") pod \"service-ca-9c57cc56f-cq2k4\" (UID: \"6ac8b3d3-8474-4e6b-b297-c18a5addf978\") " pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286253 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2e4e7ed-3b43-4a71-897a-b9a2187e6894-proxy-tls\") pod \"machine-config-controller-84d6567774-pmhxg\" (UID: \"e2e4e7ed-3b43-4a71-897a-b9a2187e6894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286270 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcc8g\" (UniqueName: \"kubernetes.io/projected/e2e4e7ed-3b43-4a71-897a-b9a2187e6894-kube-api-access-mcc8g\") pod \"machine-config-controller-84d6567774-pmhxg\" (UID: \"e2e4e7ed-3b43-4a71-897a-b9a2187e6894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286295 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286316 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c53622-6338-4176-b447-716024375a8a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fnrvk\" (UID: \"c6c53622-6338-4176-b447-716024375a8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286333 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-plugins-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286356 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvkp\" (UniqueName: \"kubernetes.io/projected/ca65646b-44bb-497c-a8b7-140264327c49-kube-api-access-8vvkp\") pod \"catalog-operator-68c6474976-rwbgg\" (UID: \"ca65646b-44bb-497c-a8b7-140264327c49\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286375 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6394eb5f-f1da-486a-93ec-845b88c8b85b-tmpfs\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 
17:46:19.286392 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72fdf9c5-ba56-4f29-9e0d-be1b6967498a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ddplg\" (UID: \"72fdf9c5-ba56-4f29-9e0d-be1b6967498a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286409 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bdd2a89-6484-4f2f-8c16-42deccae45da-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.311716 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72fdf9c5-ba56-4f29-9e0d-be1b6967498a-config\") pod \"kube-controller-manager-operator-78b949d7b-ddplg\" (UID: \"72fdf9c5-ba56-4f29-9e0d-be1b6967498a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.327084 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bdd2a89-6484-4f2f-8c16-42deccae45da-proxy-tls\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.327345 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-socket-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: 
\"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.327887 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a9d439b-de27-49e2-b4ea-fe5dfd3b1925-config\") pod \"service-ca-operator-777779d784-ltk52\" (UID: \"2a9d439b-de27-49e2-b4ea-fe5dfd3b1925\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.333856 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72fdf9c5-ba56-4f29-9e0d-be1b6967498a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ddplg\" (UID: \"72fdf9c5-ba56-4f29-9e0d-be1b6967498a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.286423 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6394eb5f-f1da-486a-93ec-845b88c8b85b-webhook-cert\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.334916 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-mountpoint-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.334988 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6394eb5f-f1da-486a-93ec-845b88c8b85b-apiservice-cert\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335056 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8bdd2a89-6484-4f2f-8c16-42deccae45da-images\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335088 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36e5052f-785d-4ca1-897c-f317e7a728e0-metrics-tls\") pod \"dns-default-xxrkw\" (UID: \"36e5052f-785d-4ca1-897c-f317e7a728e0\") " pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335119 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4mcn6\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335162 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4mcn6\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335200 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aac1329d-865c-4d14-9d66-6fbb4542afba-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w2kj2\" (UID: \"aac1329d-865c-4d14-9d66-6fbb4542afba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335235 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e28a309-ee20-408a-b0a1-d1c457139803-secret-volume\") pod \"collect-profiles-29490825-pdm74\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335261 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ca65646b-44bb-497c-a8b7-140264327c49-profile-collector-cert\") pod \"catalog-operator-68c6474976-rwbgg\" (UID: \"ca65646b-44bb-497c-a8b7-140264327c49\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335318 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-csi-data-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335369 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fjkc\" (UniqueName: \"kubernetes.io/projected/18851ff6-442b-4a22-97c1-ceeef57c6c20-kube-api-access-6fjkc\") pod \"migrator-59844c95c7-qhj6c\" (UID: \"18851ff6-442b-4a22-97c1-ceeef57c6c20\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335414 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ttnk\" (UniqueName: \"kubernetes.io/projected/cc279a24-e1ee-4880-8153-c0695b1762df-kube-api-access-9ttnk\") pod \"machine-config-server-nln2k\" (UID: \"cc279a24-e1ee-4880-8153-c0695b1762df\") " pod="openshift-machine-config-operator/machine-config-server-nln2k" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335457 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2lp2\" (UniqueName: \"kubernetes.io/projected/6334d21e-4274-4a0f-b03f-a58771653391-kube-api-access-f2lp2\") pod \"ingress-canary-nmw59\" (UID: \"6334d21e-4274-4a0f-b03f-a58771653391\") " pod="openshift-ingress-canary/ingress-canary-nmw59" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335487 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spqgs\" (UniqueName: \"kubernetes.io/projected/00be38bb-add0-4e45-9412-e9169ee8c3dc-kube-api-access-spqgs\") pod \"marketplace-operator-79b997595-4mcn6\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335518 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdwz6\" (UniqueName: \"kubernetes.io/projected/6ac8b3d3-8474-4e6b-b297-c18a5addf978-kube-api-access-gdwz6\") pod \"service-ca-9c57cc56f-cq2k4\" (UID: \"6ac8b3d3-8474-4e6b-b297-c18a5addf978\") " pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335548 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnm\" (UniqueName: 
\"kubernetes.io/projected/6394eb5f-f1da-486a-93ec-845b88c8b85b-kube-api-access-wzlnm\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335587 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2e4e7ed-3b43-4a71-897a-b9a2187e6894-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pmhxg\" (UID: \"e2e4e7ed-3b43-4a71-897a-b9a2187e6894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335613 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a9d439b-de27-49e2-b4ea-fe5dfd3b1925-serving-cert\") pod \"service-ca-operator-777779d784-ltk52\" (UID: \"2a9d439b-de27-49e2-b4ea-fe5dfd3b1925\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335643 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/02732137-9e5f-4a66-bde9-c3e6a299412c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fjcmx\" (UID: \"02732137-9e5f-4a66-bde9-c3e6a299412c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.335673 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6ac8b3d3-8474-4e6b-b297-c18a5addf978-signing-cabundle\") pod \"service-ca-9c57cc56f-cq2k4\" (UID: \"6ac8b3d3-8474-4e6b-b297-c18a5addf978\") " pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 
17:46:19.336401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-bound-sa-token\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.336537 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-mountpoint-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.337076 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6ac8b3d3-8474-4e6b-b297-c18a5addf978-signing-cabundle\") pod \"service-ca-9c57cc56f-cq2k4\" (UID: \"6ac8b3d3-8474-4e6b-b297-c18a5addf978\") " pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.337602 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8bdd2a89-6484-4f2f-8c16-42deccae45da-images\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.340815 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6394eb5f-f1da-486a-93ec-845b88c8b85b-apiservice-cert\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 
17:46:19.342169 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-plugins-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.344054 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4mcn6\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.348966 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cc279a24-e1ee-4880-8153-c0695b1762df-certs\") pod \"machine-config-server-nln2k\" (UID: \"cc279a24-e1ee-4880-8153-c0695b1762df\") " pod="openshift-machine-config-operator/machine-config-server-nln2k" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.349810 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cc279a24-e1ee-4880-8153-c0695b1762df-node-bootstrap-token\") pod \"machine-config-server-nln2k\" (UID: \"cc279a24-e1ee-4880-8153-c0695b1762df\") " pod="openshift-machine-config-operator/machine-config-server-nln2k" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.349874 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ca65646b-44bb-497c-a8b7-140264327c49-srv-cert\") pod \"catalog-operator-68c6474976-rwbgg\" (UID: \"ca65646b-44bb-497c-a8b7-140264327c49\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 
17:46:19.360552 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36e5052f-785d-4ca1-897c-f317e7a728e0-metrics-tls\") pod \"dns-default-xxrkw\" (UID: \"36e5052f-785d-4ca1-897c-f317e7a728e0\") " pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.362986 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4mcn6\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.363560 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6ac8b3d3-8474-4e6b-b297-c18a5addf978-signing-key\") pod \"service-ca-9c57cc56f-cq2k4\" (UID: \"6ac8b3d3-8474-4e6b-b297-c18a5addf978\") " pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.363828 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6394eb5f-f1da-486a-93ec-845b88c8b85b-webhook-cert\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.366209 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89-srv-cert\") pod \"olm-operator-6b444d44fb-p69cl\" (UID: \"62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.366673 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p69cl\" (UID: \"62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.369162 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rt27\" (UniqueName: \"kubernetes.io/projected/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-kube-api-access-8rt27\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.381242 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9fdv8"] Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.381571 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc-csi-data-dir\") pod \"csi-hostpathplugin-phjkm\" (UID: \"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc\") " pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.383974 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e28a309-ee20-408a-b0a1-d1c457139803-config-volume\") pod \"collect-profiles-29490825-pdm74\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.384712 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36e5052f-785d-4ca1-897c-f317e7a728e0-config-volume\") pod \"dns-default-xxrkw\" (UID: 
\"36e5052f-785d-4ca1-897c-f317e7a728e0\") " pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.384887 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c53622-6338-4176-b447-716024375a8a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fnrvk\" (UID: \"c6c53622-6338-4176-b447-716024375a8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" Jan 26 17:46:19 crc kubenswrapper[4787]: E0126 17:46:19.385039 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:19.885020721 +0000 UTC m=+148.592156854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.385578 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2e4e7ed-3b43-4a71-897a-b9a2187e6894-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pmhxg\" (UID: \"e2e4e7ed-3b43-4a71-897a-b9a2187e6894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.385645 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8bdd2a89-6484-4f2f-8c16-42deccae45da-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.390647 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6394eb5f-f1da-486a-93ec-845b88c8b85b-tmpfs\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.395396 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvqmx\" (UniqueName: \"kubernetes.io/projected/62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89-kube-api-access-xvqmx\") pod \"olm-operator-6b444d44fb-p69cl\" (UID: \"62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.421430 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e28a309-ee20-408a-b0a1-d1c457139803-secret-volume\") pod \"collect-profiles-29490825-pdm74\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.423931 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pcws\" (UniqueName: \"kubernetes.io/projected/8bdd2a89-6484-4f2f-8c16-42deccae45da-kube-api-access-8pcws\") pod \"machine-config-operator-74547568cd-2v4tf\" (UID: \"8bdd2a89-6484-4f2f-8c16-42deccae45da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.425401 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6334d21e-4274-4a0f-b03f-a58771653391-cert\") pod \"ingress-canary-nmw59\" (UID: \"6334d21e-4274-4a0f-b03f-a58771653391\") " pod="openshift-ingress-canary/ingress-canary-nmw59" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.425828 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c53622-6338-4176-b447-716024375a8a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fnrvk\" (UID: \"c6c53622-6338-4176-b447-716024375a8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.431467 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwn4n\" (UniqueName: \"kubernetes.io/projected/2a9d439b-de27-49e2-b4ea-fe5dfd3b1925-kube-api-access-cwn4n\") pod \"service-ca-operator-777779d784-ltk52\" (UID: \"2a9d439b-de27-49e2-b4ea-fe5dfd3b1925\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.432196 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aac1329d-865c-4d14-9d66-6fbb4542afba-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w2kj2\" (UID: \"aac1329d-865c-4d14-9d66-6fbb4542afba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.432736 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2e4e7ed-3b43-4a71-897a-b9a2187e6894-proxy-tls\") pod \"machine-config-controller-84d6567774-pmhxg\" (UID: \"e2e4e7ed-3b43-4a71-897a-b9a2187e6894\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.435904 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" event={"ID":"fc5e8dd5-46e7-4849-b278-d1397195e659","Type":"ContainerStarted","Data":"478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.436011 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" event={"ID":"fc5e8dd5-46e7-4849-b278-d1397195e659","Type":"ContainerStarted","Data":"038c9aefcda7e9363b3e98b55532f0b55416591500402b8bf0a2eb76db8bb055"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.437599 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.437881 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:19 crc kubenswrapper[4787]: E0126 17:46:19.437804 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:19.937783554 +0000 UTC m=+148.644919687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.438364 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: E0126 17:46:19.438705 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:19.938696249 +0000 UTC m=+148.645832382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.439447 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs5f8\" (UniqueName: \"kubernetes.io/projected/36e5052f-785d-4ca1-897c-f317e7a728e0-kube-api-access-zs5f8\") pod \"dns-default-xxrkw\" (UID: \"36e5052f-785d-4ca1-897c-f317e7a728e0\") " pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.439891 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a9d439b-de27-49e2-b4ea-fe5dfd3b1925-serving-cert\") pod \"service-ca-operator-777779d784-ltk52\" (UID: \"2a9d439b-de27-49e2-b4ea-fe5dfd3b1925\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.441410 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtwxj\" (UniqueName: \"kubernetes.io/projected/aac1329d-865c-4d14-9d66-6fbb4542afba-kube-api-access-mtwxj\") pod \"package-server-manager-789f6589d5-w2kj2\" (UID: \"aac1329d-865c-4d14-9d66-6fbb4542afba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.443616 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/02732137-9e5f-4a66-bde9-c3e6a299412c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fjcmx\" (UID: 
\"02732137-9e5f-4a66-bde9-c3e6a299412c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.453026 4787 generic.go:334] "Generic (PLEG): container finished" podID="0b84b9f3-fe68-41db-baba-bee06c16d520" containerID="69436f58b1276be33146b5477b3a1af2fb2bb1711036142c8e126e7bfa60dec3" exitCode=0 Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.453077 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" event={"ID":"0b84b9f3-fe68-41db-baba-bee06c16d520","Type":"ContainerDied","Data":"69436f58b1276be33146b5477b3a1af2fb2bb1711036142c8e126e7bfa60dec3"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.454645 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gndbr" event={"ID":"11106d86-1e86-47cf-907d-9fb690a4f56e","Type":"ContainerStarted","Data":"f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.456834 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" event={"ID":"9d8dae22-5db2-4c03-b8e8-f20c3a911957","Type":"ContainerStarted","Data":"cf9e69dae35c7e14e0e5c44c94ee2038136035374eec07685bb21f8bd0081b3e"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.458150 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7x99\" (UniqueName: \"kubernetes.io/projected/c6c53622-6338-4176-b447-716024375a8a-kube-api-access-w7x99\") pod \"kube-storage-version-migrator-operator-b67b599dd-fnrvk\" (UID: \"c6c53622-6338-4176-b447-716024375a8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.473562 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-khb2f" event={"ID":"704498ce-b71f-4fcb-b152-bdab533c235f","Type":"ContainerStarted","Data":"873e4315600631b684c480c5264669313bb5259b722c7ef294d40f4b70d66650"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.474809 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-khb2f" event={"ID":"704498ce-b71f-4fcb-b152-bdab533c235f","Type":"ContainerStarted","Data":"1423061696a304c9f4326dc2877948102d646c50baeccf177753f6cc7eda0d4c"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.476902 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.479535 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" event={"ID":"c75e61af-58ba-4e57-b3af-2c90ad1e2502","Type":"ContainerStarted","Data":"73ea2aae658f31455ad1449fd0af0be0e371ce9e1dbff77b7abb2e5b350bddf1"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.479558 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" event={"ID":"c75e61af-58ba-4e57-b3af-2c90ad1e2502","Type":"ContainerStarted","Data":"363c37b5e58aa281f914d9772177106bc195bcb925f9926589ee4ea7c2eede9b"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.480682 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gnbmj" event={"ID":"7ae4d848-8522-4dbf-bc18-5292c04f6a38","Type":"ContainerStarted","Data":"d2a320785587f4f1b35d4e730eae683dedc262ee99c5f2455589d16274b952ee"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.480701 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gnbmj" 
event={"ID":"7ae4d848-8522-4dbf-bc18-5292c04f6a38","Type":"ContainerStarted","Data":"47166ea1cb22969b5a052ca9f8970ae66e615e7fae04f74ddf77353864d2efea"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.485563 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.492707 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" event={"ID":"d8c4a6bf-9404-4c15-acc5-9563d03b7c47","Type":"ContainerStarted","Data":"d8af18991eaa29f7710ba843813b78add9172d3a597ac7c71c097a9b18860a73"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.492746 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" event={"ID":"d8c4a6bf-9404-4c15-acc5-9563d03b7c47","Type":"ContainerStarted","Data":"32b89199de4b75372ca76c6fccaf887f2b24b4dd9efd0723cada9895fe95895b"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.494618 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qgmbn" event={"ID":"f5c13435-6ec0-4a59-b905-7c4ca0d5978f","Type":"ContainerStarted","Data":"0e39db09ecb2cb31bb3c9a52bb47a326b0b22c8e4b6f131d930aab90f1e0d0ba"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.499975 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ttnk\" (UniqueName: \"kubernetes.io/projected/cc279a24-e1ee-4880-8153-c0695b1762df-kube-api-access-9ttnk\") pod \"machine-config-server-nln2k\" (UID: \"cc279a24-e1ee-4880-8153-c0695b1762df\") " pod="openshift-machine-config-operator/machine-config-server-nln2k" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.500210 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" event={"ID":"0101e33c-84b1-4777-b381-345e7fa3397b","Type":"ContainerStarted","Data":"9c71cd63b137991f488cf96bf877195e9d5271be1f9bb1940dbb28538bf1963f"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.500235 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" event={"ID":"0101e33c-84b1-4777-b381-345e7fa3397b","Type":"ContainerStarted","Data":"3068ac17401e7f4fd660249210213d67432816ede8e98fd3ef0d4cde8a366a8a"} Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.500676 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.507357 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcc8g\" (UniqueName: \"kubernetes.io/projected/e2e4e7ed-3b43-4a71-897a-b9a2187e6894-kube-api-access-mcc8g\") pod \"machine-config-controller-84d6567774-pmhxg\" (UID: \"e2e4e7ed-3b43-4a71-897a-b9a2187e6894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.511576 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.521043 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.527792 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fjkc\" (UniqueName: \"kubernetes.io/projected/18851ff6-442b-4a22-97c1-ceeef57c6c20-kube-api-access-6fjkc\") pod \"migrator-59844c95c7-qhj6c\" (UID: \"18851ff6-442b-4a22-97c1-ceeef57c6c20\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.529463 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.542641 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:19 crc kubenswrapper[4787]: E0126 17:46:19.546842 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:20.045632186 +0000 UTC m=+148.752768379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.550768 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jj65\" (UniqueName: \"kubernetes.io/projected/02732137-9e5f-4a66-bde9-c3e6a299412c-kube-api-access-5jj65\") pod \"multus-admission-controller-857f4d67dd-fjcmx\" (UID: \"02732137-9e5f-4a66-bde9-c3e6a299412c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.552696 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.563445 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2lp2\" (UniqueName: \"kubernetes.io/projected/6334d21e-4274-4a0f-b03f-a58771653391-kube-api-access-f2lp2\") pod \"ingress-canary-nmw59\" (UID: \"6334d21e-4274-4a0f-b03f-a58771653391\") " pod="openshift-ingress-canary/ingress-canary-nmw59" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.567409 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.568931 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8l69c"] Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.579564 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.595254 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-765bs\" (UniqueName: \"kubernetes.io/projected/2e28a309-ee20-408a-b0a1-d1c457139803-kube-api-access-765bs\") pod \"collect-profiles-29490825-pdm74\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.596630 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vvkp\" (UniqueName: \"kubernetes.io/projected/ca65646b-44bb-497c-a8b7-140264327c49-kube-api-access-8vvkp\") pod \"catalog-operator-68c6474976-rwbgg\" (UID: \"ca65646b-44bb-497c-a8b7-140264327c49\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.602099 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.609380 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nln2k" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.628312 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-phjkm" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.635983 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nmw59" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.645105 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdwz6\" (UniqueName: \"kubernetes.io/projected/6ac8b3d3-8474-4e6b-b297-c18a5addf978-kube-api-access-gdwz6\") pod \"service-ca-9c57cc56f-cq2k4\" (UID: \"6ac8b3d3-8474-4e6b-b297-c18a5addf978\") " pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.651466 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: E0126 17:46:19.653469 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:20.153457196 +0000 UTC m=+148.860593329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.663022 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q2957"] Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.670530 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzlnm\" (UniqueName: \"kubernetes.io/projected/6394eb5f-f1da-486a-93ec-845b88c8b85b-kube-api-access-wzlnm\") pod \"packageserver-d55dfcdfc-2qktf\" (UID: \"6394eb5f-f1da-486a-93ec-845b88c8b85b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.674073 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72fdf9c5-ba56-4f29-9e0d-be1b6967498a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ddplg\" (UID: \"72fdf9c5-ba56-4f29-9e0d-be1b6967498a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.705139 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxjzt"] Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.714714 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b"] Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.752531 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:19 crc kubenswrapper[4787]: E0126 17:46:19.752711 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:20.252679557 +0000 UTC m=+148.959815700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.752770 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: E0126 17:46:19.753372 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:20.253363305 +0000 UTC m=+148.960499438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.758151 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl"] Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.803499 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx"] Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.806542 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.807246 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7"] Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.812752 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.834655 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spqgs\" (UniqueName: \"kubernetes.io/projected/00be38bb-add0-4e45-9412-e9169ee8c3dc-kube-api-access-spqgs\") pod \"marketplace-operator-79b997595-4mcn6\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.834903 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.853897 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ca65646b-44bb-497c-a8b7-140264327c49-profile-collector-cert\") pod \"catalog-operator-68c6474976-rwbgg\" (UID: \"ca65646b-44bb-497c-a8b7-140264327c49\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.854445 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:19 crc kubenswrapper[4787]: E0126 17:46:19.854783 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:20.354753347 +0000 UTC m=+149.061889480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.857135 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.872260 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.886769 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.895019 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.927110 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-khb2f" Jan 26 17:46:19 crc kubenswrapper[4787]: W0126 17:46:19.934032 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5e5aec0_fe65_4fc3_8254_4f288ba0692b.slice/crio-67563337e72d4ba2db1ba015fb47a2ac683ca62593be9261fe0cedfb2bee9b34 WatchSource:0}: Error finding container 67563337e72d4ba2db1ba015fb47a2ac683ca62593be9261fe0cedfb2bee9b34: Status 404 returned error can't find the container with id 67563337e72d4ba2db1ba015fb47a2ac683ca62593be9261fe0cedfb2bee9b34 Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.956677 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn"] Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.961185 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:19 crc kubenswrapper[4787]: E0126 17:46:19.962018 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:20.462001441 +0000 UTC m=+149.169137574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:19 crc kubenswrapper[4787]: I0126 17:46:19.986335 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7frg7"] Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.010047 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql"] Jan 26 17:46:20 crc kubenswrapper[4787]: W0126 17:46:20.019871 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ec03f76_9431_4d70_84aa_c1073d8d5e44.slice/crio-5a5f92c307d80c71dedde26774bd476f3addd6313d46fe6c54e76c494e994bfe WatchSource:0}: Error finding container 5a5f92c307d80c71dedde26774bd476f3addd6313d46fe6c54e76c494e994bfe: Status 404 returned error can't find the container with id 5a5f92c307d80c71dedde26774bd476f3addd6313d46fe6c54e76c494e994bfe Jan 26 17:46:20 crc kubenswrapper[4787]: W0126 17:46:20.048882 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50afe70f_eb72_473a_8c31_0841be85a3ca.slice/crio-b2d7d3003bd22a6d7a417f87e6754f3a31ec47f2f851ce75cd198d80b80d8d28 WatchSource:0}: Error finding container b2d7d3003bd22a6d7a417f87e6754f3a31ec47f2f851ce75cd198d80b80d8d28: Status 404 returned error can't find the container with id b2d7d3003bd22a6d7a417f87e6754f3a31ec47f2f851ce75cd198d80b80d8d28 Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.067810 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:20 crc kubenswrapper[4787]: E0126 17:46:20.068302 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:20.568283019 +0000 UTC m=+149.275419152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.145262 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.169584 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:20 crc kubenswrapper[4787]: E0126 17:46:20.170448 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:20.670436671 +0000 UTC m=+149.377572804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.226408 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" podStartSLOduration=129.226391934 podStartE2EDuration="2m9.226391934s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:20.225377895 +0000 UTC m=+148.932514028" watchObservedRunningTime="2026-01-26 17:46:20.226391934 +0000 UTC m=+148.933528067" Jan 26 17:46:20 crc 
kubenswrapper[4787]: I0126 17:46:20.272717 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:20 crc kubenswrapper[4787]: E0126 17:46:20.273300 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:20.773281373 +0000 UTC m=+149.480417506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.332672 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-khb2f" podStartSLOduration=129.332654441 podStartE2EDuration="2m9.332654441s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:20.33192859 +0000 UTC m=+149.039064723" watchObservedRunningTime="2026-01-26 17:46:20.332654441 +0000 UTC m=+149.039790574" Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.374852 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:20 crc kubenswrapper[4787]: E0126 17:46:20.375292 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:20.875268441 +0000 UTC m=+149.582404584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.438717 4787 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-48jfn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.438783 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" podUID="fc5e8dd5-46e7-4849-b278-d1397195e659" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:46:20 crc 
kubenswrapper[4787]: I0126 17:46:20.475486 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:20 crc kubenswrapper[4787]: E0126 17:46:20.475886 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:20.97586825 +0000 UTC m=+149.683004383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.477424 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qtc46" podStartSLOduration=129.477410303 podStartE2EDuration="2m9.477410303s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:20.431404838 +0000 UTC m=+149.138540971" watchObservedRunningTime="2026-01-26 17:46:20.477410303 +0000 UTC m=+149.184546436" Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.477555 4787 csr.go:261] certificate signing request csr-smg2j is approved, waiting to be issued Jan 26 17:46:20 
crc kubenswrapper[4787]: I0126 17:46:20.491645 4787 csr.go:257] certificate signing request csr-smg2j is issued Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.514424 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-n27mc" podStartSLOduration=129.514407917 podStartE2EDuration="2m9.514407917s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:20.477384522 +0000 UTC m=+149.184520655" watchObservedRunningTime="2026-01-26 17:46:20.514407917 +0000 UTC m=+149.221544050" Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.529891 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" event={"ID":"50afe70f-eb72-473a-8c31-0841be85a3ca","Type":"ContainerStarted","Data":"b2d7d3003bd22a6d7a417f87e6754f3a31ec47f2f851ce75cd198d80b80d8d28"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.536770 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" event={"ID":"4dad5a6b-7a38-44ac-938b-f0125ca82924","Type":"ContainerStarted","Data":"87f5a8646d4f600c7abb47249d484f054be31e4c3eb26f318fb4cce7f30427c8"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.552571 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" event={"ID":"1a662548-4579-4ff8-ae50-97fbc67542a8","Type":"ContainerStarted","Data":"da7a5ba134111acb5c1cb90316264e0ca3b5a09dbf9306de6715a816746d04b0"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.561245 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b2547e45843d7517e7f5ea8d7f51679b76f4cc7bcc2c4dc9f5997da03985f3f7"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.562927 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74"] Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.563075 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7" event={"ID":"e5e5aec0-fe65-4fc3-8254-4f288ba0692b","Type":"ContainerStarted","Data":"67563337e72d4ba2db1ba015fb47a2ac683ca62593be9261fe0cedfb2bee9b34"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.576902 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:20 crc kubenswrapper[4787]: E0126 17:46:20.577333 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:21.077319843 +0000 UTC m=+149.784455976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.580153 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qgmbn" event={"ID":"f5c13435-6ec0-4a59-b905-7c4ca0d5978f","Type":"ContainerStarted","Data":"29f62aae871a201e743dec11d60a56d51e7e4fdd20eaa7e4084c780cf5df464d"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.581367 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-qgmbn" Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.591888 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4f3d138c3bfd6eddbfc4c724938c4dcc532183d555f7dc5da5968d52f15f3d7e"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.592417 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-qgmbn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.592458 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qgmbn" podUID="f5c13435-6ec0-4a59-b905-7c4ca0d5978f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 26 
17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.593174 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" event={"ID":"e3951631-385f-42ea-8f84-0f208fc807b5","Type":"ContainerStarted","Data":"c4ec690d23607a18065fd6f11962eb7aad198b54ed26dceb62c87e2c358576ba"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.593991 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" event={"ID":"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b","Type":"ContainerStarted","Data":"319cb72b4a54f1160b8d73800d9abd4000033e9bc67d04441fc0c830663951e2"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.595253 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7frg7" event={"ID":"0f3bf44f-58af-47b2-b981-8c1806ce06c5","Type":"ContainerStarted","Data":"a3e1741c84637216c1418cd7f2b9696b2dd91e3de55378924251661d8b6a8d18"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.605031 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" event={"ID":"d8c4a6bf-9404-4c15-acc5-9563d03b7c47","Type":"ContainerStarted","Data":"86b719d612040aa2a3a15211dae47e68b8aa232671c8cff9a9bf57f255145e75"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.610227 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-phjkm"] Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.629783 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl" event={"ID":"60f15cf8-bef6-4663-80f3-882d0cb9c415","Type":"ContainerStarted","Data":"89b3e9b80a4b50c079da1a009f783dac82eff01790b1df55ace07fc5b9375a0b"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.634375 4787 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" event={"ID":"144a9368-752c-404b-9881-4a03a930b77a","Type":"ContainerStarted","Data":"947433d5ee0727366ab5c845649585eef508ae7272ffc82a86afc9cc9094ca30"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.651673 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" event={"ID":"eef0b608-0578-4c92-92a2-ab1f6aa787bf","Type":"ContainerStarted","Data":"e704bb532bae2acf484de93ed9962be3679870ea76276f45b617ba9beae4257a"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.669370 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ltk52"] Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.677481 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:20 crc kubenswrapper[4787]: E0126 17:46:20.678536 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:21.178519899 +0000 UTC m=+149.885656032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.712995 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c"] Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.728294 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"086e7991a4d8c9d34ef4a89cdd797d7fc3244047903179fef91eb65b16ba8c32"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.770296 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" event={"ID":"3ec03f76-9431-4d70-84aa-c1073d8d5e44","Type":"ContainerStarted","Data":"5a5f92c307d80c71dedde26774bd476f3addd6313d46fe6c54e76c494e994bfe"} Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.807827 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:20 crc kubenswrapper[4787]: E0126 17:46:20.817631 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:21.317609442 +0000 UTC m=+150.024745575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.819118 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-shkrx" podStartSLOduration=129.819093564 podStartE2EDuration="2m9.819093564s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:20.801613576 +0000 UTC m=+149.508749719" watchObservedRunningTime="2026-01-26 17:46:20.819093564 +0000 UTC m=+149.526229697" Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.844873 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fjcmx"] Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.855127 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.871716 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld 
Jan 26 17:46:20 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:20 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:20 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.871759 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.910262 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:20 crc kubenswrapper[4787]: E0126 17:46:20.911683 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:21.411661059 +0000 UTC m=+150.118797202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:20 crc kubenswrapper[4787]: I0126 17:46:20.980721 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg"] Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.007055 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-gndbr" podStartSLOduration=130.007034833 podStartE2EDuration="2m10.007034833s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:21.006509717 +0000 UTC m=+149.713645860" watchObservedRunningTime="2026-01-26 17:46:21.007034833 +0000 UTC m=+149.714170966" Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.013249 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:21 crc kubenswrapper[4787]: E0126 17:46:21.013772 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 17:46:21.513727659 +0000 UTC m=+150.220863792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.113916 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:21 crc kubenswrapper[4787]: E0126 17:46:21.114408 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:21.614389889 +0000 UTC m=+150.321526022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.186781 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" podStartSLOduration=130.186758651 podStartE2EDuration="2m10.186758651s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:21.185187257 +0000 UTC m=+149.892323390" watchObservedRunningTime="2026-01-26 17:46:21.186758651 +0000 UTC m=+149.893894784" Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.218034 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:21 crc kubenswrapper[4787]: E0126 17:46:21.218404 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:21.718389944 +0000 UTC m=+150.425526077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:21 crc kubenswrapper[4787]: W0126 17:46:21.236699 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc279a24_e1ee_4880_8153_c0695b1762df.slice/crio-b1465bb47cfffafd96b15a07489802267ce8e8d236f6336685d1550d1748cb5e WatchSource:0}: Error finding container b1465bb47cfffafd96b15a07489802267ce8e8d236f6336685d1550d1748cb5e: Status 404 returned error can't find the container with id b1465bb47cfffafd96b15a07489802267ce8e8d236f6336685d1550d1748cb5e Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.321535 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:21 crc kubenswrapper[4787]: E0126 17:46:21.322056 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:21.822036838 +0000 UTC m=+150.529172971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.437192 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:21 crc kubenswrapper[4787]: E0126 17:46:21.437574 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:21.937561223 +0000 UTC m=+150.644697356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.465683 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w88rv" podStartSLOduration=130.465663868 podStartE2EDuration="2m10.465663868s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:21.417409991 +0000 UTC m=+150.124546124" watchObservedRunningTime="2026-01-26 17:46:21.465663868 +0000 UTC m=+150.172800011" Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.492608 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 17:41:20 +0000 UTC, rotation deadline is 2026-10-29 13:36:53.153742171 +0000 UTC Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.492651 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6619h50m31.66109388s for next certificate rotation Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.508611 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-qgmbn" podStartSLOduration=130.508588677 podStartE2EDuration="2m10.508588677s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:21.466134471 +0000 UTC m=+150.173270604" 
watchObservedRunningTime="2026-01-26 17:46:21.508588677 +0000 UTC m=+150.215724820" Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.538730 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:21 crc kubenswrapper[4787]: E0126 17:46:21.539663 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.039630373 +0000 UTC m=+150.746766516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.644382 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:21 crc kubenswrapper[4787]: E0126 17:46:21.644789 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.144760889 +0000 UTC m=+150.851897022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.668715 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-gnbmj" podStartSLOduration=130.668692957 podStartE2EDuration="2m10.668692957s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:21.509536553 +0000 UTC m=+150.216672686" watchObservedRunningTime="2026-01-26 17:46:21.668692957 +0000 UTC m=+150.375829090" Jan 26 17:46:21 crc kubenswrapper[4787]: E0126 17:46:21.755547 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.255515101 +0000 UTC m=+150.962651244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.767162 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.771995 4787 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-48jfn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.772359 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" podUID="fc5e8dd5-46e7-4849-b278-d1397195e659" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.791381 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:21 crc kubenswrapper[4787]: E0126 17:46:21.791884 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.291870576 +0000 UTC m=+150.999006709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.804132 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nln2k" event={"ID":"cc279a24-e1ee-4880-8153-c0695b1762df","Type":"ContainerStarted","Data":"b1465bb47cfffafd96b15a07489802267ce8e8d236f6336685d1550d1748cb5e"} Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.810740 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" event={"ID":"2e28a309-ee20-408a-b0a1-d1c457139803","Type":"ContainerStarted","Data":"5e0362f7fb500806ec38c17e4b443e3808d9a0578c48eff8c5ba5d664947c57b"} Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.858263 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c09c0eb4ce07cca2d82f5758f07c339642d3830c5e536f02f17ff77a0905d3b2"} Jan 26 17:46:21 
crc kubenswrapper[4787]: I0126 17:46:21.859658 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.875838 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:21 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:21 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:21 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.875905 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.893608 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:21 crc kubenswrapper[4787]: E0126 17:46:21.894008 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.393989238 +0000 UTC m=+151.101125371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.902977 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" event={"ID":"2a9d439b-de27-49e2-b4ea-fe5dfd3b1925","Type":"ContainerStarted","Data":"2b723d3d88e79e0aa7330ee676e637d79edef42ba8142db60c79cf485c68158d"} Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.908059 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9f419b6b06e74e3a8a16a21f21ec757301bc3051ab6fe836d35d06017fb1ccf6"} Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.916866 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c" event={"ID":"18851ff6-442b-4a22-97c1-ceeef57c6c20","Type":"ContainerStarted","Data":"f5f598f140227e9bcac432018c64277c46fda0372f59d13b0ed6cfefe9dec8b1"} Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.943122 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" event={"ID":"72fdf9c5-ba56-4f29-9e0d-be1b6967498a","Type":"ContainerStarted","Data":"80eb87ba907ae90ca865f28e2abf923838b20dcc535dc45a9077672106a9a4d7"} Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.957156 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-phjkm" event={"ID":"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc","Type":"ContainerStarted","Data":"66533b9777f4c7d052eded15c55ae2c3a194000a90eceb3c77b443349a707125"} Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.971094 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" event={"ID":"4dad5a6b-7a38-44ac-938b-f0125ca82924","Type":"ContainerStarted","Data":"318c0e8ad3b1fa58a7e413eae6afe20e446330543179e216bf5fb735eef65c78"} Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.992317 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx" event={"ID":"02732137-9e5f-4a66-bde9-c3e6a299412c","Type":"ContainerStarted","Data":"ac3ee094d9f92f6829ec591fde4f2d4089f49fec489cde5afba4e373b2e082de"} Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.993251 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-qgmbn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.993290 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qgmbn" podUID="f5c13435-6ec0-4a59-b905-7c4ca0d5978f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 26 17:46:21 crc kubenswrapper[4787]: I0126 17:46:21.995109 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:21 crc kubenswrapper[4787]: E0126 17:46:21.995883 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.495868583 +0000 UTC m=+151.203004716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.086605 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf"] Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.096608 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.596578125 +0000 UTC m=+151.303714268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.096862 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.099504 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.101401 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.601380678 +0000 UTC m=+151.308516811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.126241 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cq2k4"] Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.136451 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk"] Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.144586 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4mcn6"] Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.158426 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j"] Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.170346 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf"] Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.200685 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.200928 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.700863266 +0000 UTC m=+151.407999399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.201073 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.201457 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.701446423 +0000 UTC m=+151.408582566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: W0126 17:46:22.241215 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ac8b3d3_8474_4e6b_b297_c18a5addf978.slice/crio-686e694838776101a29a1ef4006e8cb7c607eab90fd9e19658a38fd63e41a9cb WatchSource:0}: Error finding container 686e694838776101a29a1ef4006e8cb7c607eab90fd9e19658a38fd63e41a9cb: Status 404 returned error can't find the container with id 686e694838776101a29a1ef4006e8cb7c607eab90fd9e19658a38fd63e41a9cb Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.261469 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nmw59"] Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.279664 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xxrkw"] Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.297302 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg"] Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.304157 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.304537 4787 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.80452169 +0000 UTC m=+151.511657823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: W0126 17:46:22.327334 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6334d21e_4274_4a0f_b03f_a58771653391.slice/crio-142359f46fdeea6236931d2389e4ce2164f71e19067568d772a443af6f608a1b WatchSource:0}: Error finding container 142359f46fdeea6236931d2389e4ce2164f71e19067568d772a443af6f608a1b: Status 404 returned error can't find the container with id 142359f46fdeea6236931d2389e4ce2164f71e19067568d772a443af6f608a1b Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.340873 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2"] Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.357776 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg"] Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.405132 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.405613 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:22.905596232 +0000 UTC m=+151.612732365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.443487 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl"] Jan 26 17:46:22 crc kubenswrapper[4787]: W0126 17:46:22.477848 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e4e7ed_3b43_4a71_897a_b9a2187e6894.slice/crio-941c6e1676ea03ae9ed6d16d934e2af35732e1d4ee534b9ded57d4474ad27d76 WatchSource:0}: Error finding container 941c6e1676ea03ae9ed6d16d934e2af35732e1d4ee534b9ded57d4474ad27d76: Status 404 returned error can't find the container with id 941c6e1676ea03ae9ed6d16d934e2af35732e1d4ee534b9ded57d4474ad27d76 Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.505812 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.506138 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:23.006084118 +0000 UTC m=+151.713220261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.506833 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.507416 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:23.007404775 +0000 UTC m=+151.714540978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.608629 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.609028 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:23.109009573 +0000 UTC m=+151.816145706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.711793 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.712239 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:23.212218144 +0000 UTC m=+151.919354277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.813150 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.813478 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:23.31343625 +0000 UTC m=+152.020572383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.813770 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.814114 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:23.314100238 +0000 UTC m=+152.021236371 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.863548 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:22 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:22 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:22 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.863587 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:22 crc kubenswrapper[4787]: I0126 17:46:22.914592 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:22 crc kubenswrapper[4787]: E0126 17:46:22.915058 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 17:46:23.415040387 +0000 UTC m=+152.122176520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.002774 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" event={"ID":"1a662548-4579-4ff8-ae50-97fbc67542a8","Type":"ContainerStarted","Data":"d94bed6981e0f80c175e0b526fd72b3c6d0c6ef05b6c0a8a3500c7fa8e0890d4"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.008352 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" event={"ID":"144a9368-752c-404b-9881-4a03a930b77a","Type":"ContainerStarted","Data":"9d494296c8005d425809c5f82d6978603f5753856119108c54651fb0859f09f8"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.010073 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" event={"ID":"0b84b9f3-fe68-41db-baba-bee06c16d520","Type":"ContainerStarted","Data":"72c44200cba62f192b30073767e882e6eddb8e25a16a2164ae5cd09a6b8f67b8"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.011754 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" event={"ID":"8bdd2a89-6484-4f2f-8c16-42deccae45da","Type":"ContainerStarted","Data":"bc06403158a7fffcf03187493a3de8974ed0ea2cda2ee321d1b93938b8fd33a1"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.013030 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nln2k" event={"ID":"cc279a24-e1ee-4880-8153-c0695b1762df","Type":"ContainerStarted","Data":"8dfe9bafbd75358703870c41a808f40ff4af64188df41a72e43c0345a6fa1793"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.014835 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-phjkm" event={"ID":"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc","Type":"ContainerStarted","Data":"b1794234e4d950214e94ed47bd894052947803b20e7ea834a0935391ad846e76"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.016892 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:23 crc kubenswrapper[4787]: E0126 17:46:23.017300 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:23.517284772 +0000 UTC m=+152.224420905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.025268 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8hdx" podStartSLOduration=132.025253115 podStartE2EDuration="2m12.025253115s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.023760513 +0000 UTC m=+151.730896646" watchObservedRunningTime="2026-01-26 17:46:23.025253115 +0000 UTC m=+151.732389248" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.030988 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xxrkw" event={"ID":"36e5052f-785d-4ca1-897c-f317e7a728e0","Type":"ContainerStarted","Data":"de39bbd31b9a35d1293566de4c035869b3c5ff2595cc12a351c4f1fceb600be4"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.041721 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" event={"ID":"eef0b608-0578-4c92-92a2-ab1f6aa787bf","Type":"ContainerStarted","Data":"34c461a3a1e76b977a27e584ebe87288129484939f1a8aa2d2428753f1685e75"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.051610 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nln2k" podStartSLOduration=7.05159246 podStartE2EDuration="7.05159246s" 
podCreationTimestamp="2026-01-26 17:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.046302742 +0000 UTC m=+151.753438885" watchObservedRunningTime="2026-01-26 17:46:23.05159246 +0000 UTC m=+151.758728593" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.059213 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" event={"ID":"4dad5a6b-7a38-44ac-938b-f0125ca82924","Type":"ContainerStarted","Data":"fcea98c97bb40866e95c182aba20cf0c763643597a65618b16bde5d8d698440d"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.087149 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" event={"ID":"72fdf9c5-ba56-4f29-9e0d-be1b6967498a","Type":"ContainerStarted","Data":"12c703e043bd81b8679c8190b2b24d327ae0487d98f1992da820ae4810bd219b"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.088514 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.088734 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.089178 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" event={"ID":"00be38bb-add0-4e45-9412-e9169ee8c3dc","Type":"ContainerStarted","Data":"4371cbd56233e65b826e606df876224dcb87c9f6ac25abc520b94eb44b281e7c"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.095152 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c" 
event={"ID":"18851ff6-442b-4a22-97c1-ceeef57c6c20","Type":"ContainerStarted","Data":"a8daf019614e576b2982aff8fb75f0e007477e01a1bc0f391de90c0384bd9ae2"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.102775 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" podStartSLOduration=131.102757678 podStartE2EDuration="2m11.102757678s" podCreationTimestamp="2026-01-26 17:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.074021846 +0000 UTC m=+151.781157989" watchObservedRunningTime="2026-01-26 17:46:23.102757678 +0000 UTC m=+151.809893811" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.103870 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vsn6b" podStartSLOduration=132.103857609 podStartE2EDuration="2m12.103857609s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.101083482 +0000 UTC m=+151.808219625" watchObservedRunningTime="2026-01-26 17:46:23.103857609 +0000 UTC m=+151.810993742" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.117494 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:23 crc kubenswrapper[4787]: E0126 17:46:23.118565 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-26 17:46:23.61854919 +0000 UTC m=+152.325685323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.146536 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" event={"ID":"e2e4e7ed-3b43-4a71-897a-b9a2187e6894","Type":"ContainerStarted","Data":"941c6e1676ea03ae9ed6d16d934e2af35732e1d4ee534b9ded57d4474ad27d76"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.155296 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7frg7" event={"ID":"0f3bf44f-58af-47b2-b981-8c1806ce06c5","Type":"ContainerStarted","Data":"bbf29c2dc646aa19a40a9813ca6924a96833cd3b46d314585ea2c7b3e283add4"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.166403 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-q2957" podStartSLOduration=132.166374495 podStartE2EDuration="2m12.166374495s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.165993865 +0000 UTC m=+151.873130008" watchObservedRunningTime="2026-01-26 17:46:23.166374495 +0000 UTC m=+151.873510628" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.239998 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:23 crc kubenswrapper[4787]: E0126 17:46:23.240422 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:23.740407313 +0000 UTC m=+152.447543446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.258397 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" event={"ID":"6394eb5f-f1da-486a-93ec-845b88c8b85b","Type":"ContainerStarted","Data":"bb13b985cd6c6b1a5dac2b98d49f2e5a0d2d7b3172d450ef2513b1853a149510"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.302180 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" event={"ID":"3ec03f76-9431-4d70-84aa-c1073d8d5e44","Type":"ContainerStarted","Data":"e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.303495 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.338283 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.340024 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.340567 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" podStartSLOduration=131.340544408 podStartE2EDuration="2m11.340544408s" podCreationTimestamp="2026-01-26 17:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.340290261 +0000 UTC m=+152.047426394" watchObservedRunningTime="2026-01-26 17:46:23.340544408 +0000 UTC m=+152.047680541" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.340802 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:23 crc kubenswrapper[4787]: E0126 17:46:23.342348 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:23.842330118 +0000 UTC m=+152.549466251 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.367063 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" event={"ID":"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b","Type":"ContainerStarted","Data":"8539d257580b14eeff94e95019afadb4f664f9f21bab7af32008179c0276f6ab"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.383017 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" event={"ID":"50afe70f-eb72-473a-8c31-0841be85a3ca","Type":"ContainerStarted","Data":"63ce344eded8d97c42afbc531d84e968a06201db9003e9ca16b204b92546c3cd"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.399722 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" event={"ID":"2e28a309-ee20-408a-b0a1-d1c457139803","Type":"ContainerStarted","Data":"f5fa366160dea108a479e35d70501195180c7b65c253f9f0ed485ff780a7f647"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.430218 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j" event={"ID":"c488c5cb-fbf3-4ca4-9ef7-3e171e36b302","Type":"ContainerStarted","Data":"bb82911375e2169b6d8f22aed8cca6b61d3f95ef61efadb3c8165c137779f3d3"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.471545 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:23 crc kubenswrapper[4787]: E0126 17:46:23.473147 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:23.973135931 +0000 UTC m=+152.680272064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.478836 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" event={"ID":"aac1329d-865c-4d14-9d66-6fbb4542afba","Type":"ContainerStarted","Data":"9ce91068fb91505af48c61a034707edc98dafd408ed8d545e63d3fa22c8df33d"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.479521 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" podStartSLOduration=83.479497718 podStartE2EDuration="1m23.479497718s" podCreationTimestamp="2026-01-26 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.476528215 +0000 UTC m=+152.183664348" 
watchObservedRunningTime="2026-01-26 17:46:23.479497718 +0000 UTC m=+152.186633851" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.481662 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx" event={"ID":"02732137-9e5f-4a66-bde9-c3e6a299412c","Type":"ContainerStarted","Data":"2e255117e0f7fd924945b62d57027aecb7afaf4d72e4322c14b5b5d95186f060"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.553535 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c6cql" podStartSLOduration=132.553512695 podStartE2EDuration="2m12.553512695s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.529903235 +0000 UTC m=+152.237039368" watchObservedRunningTime="2026-01-26 17:46:23.553512695 +0000 UTC m=+152.260648828" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.554276 4787 generic.go:334] "Generic (PLEG): container finished" podID="e3951631-385f-42ea-8f84-0f208fc807b5" containerID="debf17481d28aa2f46c45f3334d3dcc186ab5760b5912a49a9bf398aac565d13" exitCode=0 Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.554396 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" event={"ID":"e3951631-385f-42ea-8f84-0f208fc807b5","Type":"ContainerDied","Data":"debf17481d28aa2f46c45f3334d3dcc186ab5760b5912a49a9bf398aac565d13"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.572932 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:23 crc kubenswrapper[4787]: E0126 17:46:23.576777 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:24.076743274 +0000 UTC m=+152.783879407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.684235 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:23 crc kubenswrapper[4787]: E0126 17:46:23.685145 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:24.18512953 +0000 UTC m=+152.892265663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.686471 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" event={"ID":"6ac8b3d3-8474-4e6b-b297-c18a5addf978","Type":"ContainerStarted","Data":"686e694838776101a29a1ef4006e8cb7c607eab90fd9e19658a38fd63e41a9cb"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.695526 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" event={"ID":"62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89","Type":"ContainerStarted","Data":"b816fed7cfe3f24a4001f0c63a14912e89d7be951e511f9d02544b2b0bb12f43"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.733647 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7" event={"ID":"e5e5aec0-fe65-4fc3-8254-4f288ba0692b","Type":"ContainerStarted","Data":"6724b3b04ff2cb56909bd1195b513a49197d7bbfed3cc6cc2f12af3aaf6894af"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.786911 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:23 crc kubenswrapper[4787]: E0126 17:46:23.787287 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:24.287271982 +0000 UTC m=+152.994408105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.787351 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" event={"ID":"2a9d439b-de27-49e2-b4ea-fe5dfd3b1925","Type":"ContainerStarted","Data":"41a3653ae7b2f1bb40e66d664bcf9f86886fb03b284ac6f0fda70e9d51f5a603"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.789684 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t49r7" podStartSLOduration=132.789668289 podStartE2EDuration="2m12.789668289s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.788185348 +0000 UTC m=+152.495321491" watchObservedRunningTime="2026-01-26 17:46:23.789668289 +0000 UTC m=+152.496804422" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.806255 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nmw59" 
event={"ID":"6334d21e-4274-4a0f-b03f-a58771653391","Type":"ContainerStarted","Data":"142359f46fdeea6236931d2389e4ce2164f71e19067568d772a443af6f608a1b"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.834728 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" event={"ID":"ca65646b-44bb-497c-a8b7-140264327c49","Type":"ContainerStarted","Data":"b13f6b3a908824404a8e97daba89c1c0fdd311674f2d12b8f2cca825d28ccbde"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.836089 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.867544 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-rwbgg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.867590 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" podUID="ca65646b-44bb-497c-a8b7-140264327c49" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.879214 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:23 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:23 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:23 crc kubenswrapper[4787]: healthz check failed Jan 
26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.879291 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.883807 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8aa2fcdc3c051d1efd9808d1331fef0bea35ef83c21b31116cd8e4bb21196237"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.888680 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:23 crc kubenswrapper[4787]: E0126 17:46:23.890332 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:24.390316499 +0000 UTC m=+153.097452632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.895715 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ltk52" podStartSLOduration=131.895678909 podStartE2EDuration="2m11.895678909s" podCreationTimestamp="2026-01-26 17:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.844656904 +0000 UTC m=+152.551793037" watchObservedRunningTime="2026-01-26 17:46:23.895678909 +0000 UTC m=+152.602815042" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.896079 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nmw59" podStartSLOduration=7.89607319 podStartE2EDuration="7.89607319s" podCreationTimestamp="2026-01-26 17:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.892373876 +0000 UTC m=+152.599510009" watchObservedRunningTime="2026-01-26 17:46:23.89607319 +0000 UTC m=+152.603209323" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.903273 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl" event={"ID":"60f15cf8-bef6-4663-80f3-882d0cb9c415","Type":"ContainerStarted","Data":"37bac0e28e9273fc9df7b13192dcb820220f345609aaea87897889624042b539"} 
Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.920926 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" event={"ID":"c6c53622-6338-4176-b447-716024375a8a","Type":"ContainerStarted","Data":"d1c60b66cb1dfe8eb67b664c905698bd3325eaa7d4052e5b710c077addc3b2ee"} Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.922008 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-qgmbn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.922036 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qgmbn" podUID="f5c13435-6ec0-4a59-b905-7c4ca0d5978f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.949079 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" podStartSLOduration=132.94905615 podStartE2EDuration="2m12.94905615s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:23.948604366 +0000 UTC m=+152.655740499" watchObservedRunningTime="2026-01-26 17:46:23.94905615 +0000 UTC m=+152.656192283" Jan 26 17:46:23 crc kubenswrapper[4787]: I0126 17:46:23.997408 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.000182 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:24.500151786 +0000 UTC m=+153.207287929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.041460 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td5kl" podStartSLOduration=133.041438879 podStartE2EDuration="2m13.041438879s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:24.040665517 +0000 UTC m=+152.747801650" watchObservedRunningTime="2026-01-26 17:46:24.041438879 +0000 UTC m=+152.748575022" Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.107728 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.110267 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:24.61024977 +0000 UTC m=+153.317385903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.218774 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.219207 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:24.719184882 +0000 UTC m=+153.426321035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.326520 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.326867 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:24.826851418 +0000 UTC m=+153.533987551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.427152 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.427318 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:24.927285672 +0000 UTC m=+153.634421825 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.427629 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.428026 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:24.928010823 +0000 UTC m=+153.635146956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.528287 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.528411 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.028388196 +0000 UTC m=+153.735524339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.528482 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.528851 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.028841248 +0000 UTC m=+153.735977381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.635359 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.635696 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.135677821 +0000 UTC m=+153.842813954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.736760 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.737153 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.237137664 +0000 UTC m=+153.944273797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.837665 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.837799 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.337778694 +0000 UTC m=+154.044914827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.837857 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.838235 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.338225167 +0000 UTC m=+154.045361300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.858054 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:24 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:24 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:24 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.858135 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.926974 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" event={"ID":"6394eb5f-f1da-486a-93ec-845b88c8b85b","Type":"ContainerStarted","Data":"99affec2e988d9c53bdcca0ba02f2be1b63f7d519868d4d21bc2ec6e6600c896"} Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.927741 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.929785 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c" event={"ID":"18851ff6-442b-4a22-97c1-ceeef57c6c20","Type":"ContainerStarted","Data":"7e2eb1a97ad40a14dc99595dbf6b33c6b7dd7594505618600a3389a2e756d12a"} Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.931840 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" event={"ID":"e2e4e7ed-3b43-4a71-897a-b9a2187e6894","Type":"ContainerStarted","Data":"d920591914dbec4a66df18e1fd3445e1cd71302a7a609c226f4bc08959d722d6"} Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.931865 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" event={"ID":"e2e4e7ed-3b43-4a71-897a-b9a2187e6894","Type":"ContainerStarted","Data":"1dd1cc8c3a203fd252b38d80db64d832a053cd8c78c6913005f7220fab6ba94a"} Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.938910 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.939017 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.43899417 +0000 UTC m=+154.146130293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.939183 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:24 crc kubenswrapper[4787]: E0126 17:46:24.939494 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.439486864 +0000 UTC m=+154.146622997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.946316 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" event={"ID":"1b873062-3dbd-40cb-92f9-cc0fbfd98f2b","Type":"ContainerStarted","Data":"9349c7eb13ae13050dc648f8ba04f76aadee7fd8c47f4a498bc7b92fbce3ae41"} Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.957745 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7frg7" event={"ID":"0f3bf44f-58af-47b2-b981-8c1806ce06c5","Type":"ContainerStarted","Data":"078fc2360514de2c0b159871e27f38ca1a4a48cd36bedbd3e2af15b384ad04b6"} Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.964530 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" event={"ID":"6ac8b3d3-8474-4e6b-b297-c18a5addf978","Type":"ContainerStarted","Data":"60ebc6808d24e5db71af58ed56a6a21c46929ee70cc4b0b9c372528431bf045a"} Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.982927 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" podStartSLOduration=132.982909846 podStartE2EDuration="2m12.982909846s" podCreationTimestamp="2026-01-26 17:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:24.980521539 +0000 UTC m=+153.687657672" watchObservedRunningTime="2026-01-26 
17:46:24.982909846 +0000 UTC m=+153.690045979" Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.986143 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" event={"ID":"8bdd2a89-6484-4f2f-8c16-42deccae45da","Type":"ContainerStarted","Data":"54a0fcaae28686fd39d86d16fa10fc82a50f4662fceaca53179c7ddb114c42b4"} Jan 26 17:46:24 crc kubenswrapper[4787]: I0126 17:46:24.986184 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" event={"ID":"8bdd2a89-6484-4f2f-8c16-42deccae45da","Type":"ContainerStarted","Data":"9d8dbe7ac3e3126fac6e51154e44d2b4b60c7306c4fe002b096dab2feb1dc8dc"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.005425 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx" event={"ID":"02732137-9e5f-4a66-bde9-c3e6a299412c","Type":"ContainerStarted","Data":"1057d72625045e7b9ca34b332f7615653c1f7995881ad0e37c0fa7eda2188859"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.009609 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pmhxg" podStartSLOduration=134.009588241 podStartE2EDuration="2m14.009588241s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.007584765 +0000 UTC m=+153.714720918" watchObservedRunningTime="2026-01-26 17:46:25.009588241 +0000 UTC m=+153.716724374" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.017179 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xxrkw" 
event={"ID":"36e5052f-785d-4ca1-897c-f317e7a728e0","Type":"ContainerStarted","Data":"fb6d3f241a7cae36b3f54d886ed17b3961c092d455a43da48de6a7cf74a356cb"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.017220 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xxrkw" event={"ID":"36e5052f-785d-4ca1-897c-f317e7a728e0","Type":"ContainerStarted","Data":"08a21436f38d191931dd0cafa72f053297292841e2257ffa9ed0bb31b55e3b19"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.017822 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.018970 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" event={"ID":"c6c53622-6338-4176-b447-716024375a8a","Type":"ContainerStarted","Data":"c75e362a602175f6e8cf7d78d6f7a87e35054ec740396980e97b1aef0ff5cd79"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.041519 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.041642 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.541612425 +0000 UTC m=+154.248748558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.041896 4787 generic.go:334] "Generic (PLEG): container finished" podID="144a9368-752c-404b-9881-4a03a930b77a" containerID="9d494296c8005d425809c5f82d6978603f5753856119108c54651fb0859f09f8" exitCode=0 Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.041997 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.042242 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" event={"ID":"144a9368-752c-404b-9881-4a03a930b77a","Type":"ContainerDied","Data":"9d494296c8005d425809c5f82d6978603f5753856119108c54651fb0859f09f8"} Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.044601 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.544587828 +0000 UTC m=+154.251723961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.093383 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cq2k4" podStartSLOduration=133.0933609 podStartE2EDuration="2m13.0933609s" podCreationTimestamp="2026-01-26 17:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.034521227 +0000 UTC m=+153.741657360" watchObservedRunningTime="2026-01-26 17:46:25.0933609 +0000 UTC m=+153.800497033" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.100669 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-phjkm" event={"ID":"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc","Type":"ContainerStarted","Data":"8b2a590e75f026864791f43984985338218c5cf9b40c82d047a07ee41b9a4468"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.112431 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nmw59" event={"ID":"6334d21e-4274-4a0f-b03f-a58771653391","Type":"ContainerStarted","Data":"9b8cb55b0fa3d02aa8de0b5816102b0992aceda875281ef744019fc1c024faa1"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.127020 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" 
event={"ID":"e3951631-385f-42ea-8f84-0f208fc807b5","Type":"ContainerStarted","Data":"53d8ff7b5c6e8bd6767c6810ff40a21241610db465e50adc1eb05f95f70c789d"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.127265 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.142609 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" event={"ID":"62d7bd72-7a5f-45fa-9bf6-cf2bae12ca89","Type":"ContainerStarted","Data":"9eb00080d67720a4af8276e660506c119c80f3a5f6af4ac04794795c3a90b944"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.143463 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.153768 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" event={"ID":"00be38bb-add0-4e45-9412-e9169ee8c3dc","Type":"ContainerStarted","Data":"72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.154752 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.156044 4787 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4mcn6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.156086 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" 
podUID="00be38bb-add0-4e45-9412-e9169ee8c3dc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.156591 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.156667 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.656651697 +0000 UTC m=+154.363787820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.157524 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.180477 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-7frg7" podStartSLOduration=134.180452922 podStartE2EDuration="2m14.180452922s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.100324405 +0000 UTC m=+153.807460538" watchObservedRunningTime="2026-01-26 17:46:25.180452922 +0000 UTC m=+153.887589055" Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.182508 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.682496139 +0000 UTC m=+154.389632272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.199490 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.199648 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j" event={"ID":"c488c5cb-fbf3-4ca4-9ef7-3e171e36b302","Type":"ContainerStarted","Data":"cb73c1afb9cb5053804687b01427cbdd9f5fd38847bbd80755cc7c78cdb09dec"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.238049 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" event={"ID":"aac1329d-865c-4d14-9d66-6fbb4542afba","Type":"ContainerStarted","Data":"de1051dbde0c2d4f0ab01600ad7557dfa59f077e35b389c6e12a0c5dbc4f8544"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.238116 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" event={"ID":"aac1329d-865c-4d14-9d66-6fbb4542afba","Type":"ContainerStarted","Data":"2f6313babba915bbd829076c6e073c7b93f287a8b1543ee418a278a5dd4cb7a3"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.238152 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjzt" podStartSLOduration=134.238141212 podStartE2EDuration="2m14.238141212s" 
podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.181226604 +0000 UTC m=+153.888362737" watchObservedRunningTime="2026-01-26 17:46:25.238141212 +0000 UTC m=+153.945277345" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.239098 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.258084 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg" event={"ID":"ca65646b-44bb-497c-a8b7-140264327c49","Type":"ContainerStarted","Data":"3b9a343a874236ce02b7a0beba77fd4d25edae7a09a52e3a5d9ef2f6837b8b2c"} Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.266134 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.267738 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.767717689 +0000 UTC m=+154.474853832 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.268463 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.273360 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rwbgg"
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.275085 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.775067564 +0000 UTC m=+154.482203717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.288411 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7djvj"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.292260 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qhj6c" podStartSLOduration=134.292239393 podStartE2EDuration="2m14.292239393s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.24376319 +0000 UTC m=+153.950899333" watchObservedRunningTime="2026-01-26 17:46:25.292239393 +0000 UTC m=+153.999375536"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.370726 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.371572 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.871541878 +0000 UTC m=+154.578678011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.371912 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x"
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.373155 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.873141752 +0000 UTC m=+154.580277885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.374535 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" podStartSLOduration=134.374520041 podStartE2EDuration="2m14.374520041s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.302264453 +0000 UTC m=+154.009400596" watchObservedRunningTime="2026-01-26 17:46:25.374520041 +0000 UTC m=+154.081656174"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.375502 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" podStartSLOduration=134.375496578 podStartE2EDuration="2m14.375496578s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.372727241 +0000 UTC m=+154.079863374" watchObservedRunningTime="2026-01-26 17:46:25.375496578 +0000 UTC m=+154.082632711"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.413383 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fnrvk" podStartSLOduration=134.413362875 podStartE2EDuration="2m14.413362875s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.412714537 +0000 UTC m=+154.119850680" watchObservedRunningTime="2026-01-26 17:46:25.413362875 +0000 UTC m=+154.120499018"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.445110 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" podStartSLOduration=134.445089521 podStartE2EDuration="2m14.445089521s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.44145619 +0000 UTC m=+154.148592323" watchObservedRunningTime="2026-01-26 17:46:25.445089521 +0000 UTC m=+154.152225654"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.474511 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.474911 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:25.974887593 +0000 UTC m=+154.682023736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.508857 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p69cl" podStartSLOduration=134.508837371 podStartE2EDuration="2m14.508837371s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.502865305 +0000 UTC m=+154.210001438" watchObservedRunningTime="2026-01-26 17:46:25.508837371 +0000 UTC m=+154.215973504"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.510980 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5bm7j" podStartSLOduration=134.51096785 podStartE2EDuration="2m14.51096785s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.475160801 +0000 UTC m=+154.182296944" watchObservedRunningTime="2026-01-26 17:46:25.51096785 +0000 UTC m=+154.218103983"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.533233 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjcmx" podStartSLOduration=134.533211371 podStartE2EDuration="2m14.533211371s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.531165665 +0000 UTC m=+154.238301808" watchObservedRunningTime="2026-01-26 17:46:25.533211371 +0000 UTC m=+154.240347504"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.575738 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x"
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.576249 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.076234183 +0000 UTC m=+154.783370316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.601633 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2v4tf" podStartSLOduration=134.601610421 podStartE2EDuration="2m14.601610421s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.600576322 +0000 UTC m=+154.307712465" watchObservedRunningTime="2026-01-26 17:46:25.601610421 +0000 UTC m=+154.308746554"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.626965 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xxrkw" podStartSLOduration=9.626926577999999 podStartE2EDuration="9.626926578s" podCreationTimestamp="2026-01-26 17:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.624766268 +0000 UTC m=+154.331902421" watchObservedRunningTime="2026-01-26 17:46:25.626926578 +0000 UTC m=+154.334062711"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.676856 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.677128 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.177106449 +0000 UTC m=+154.884242592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.677241 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x"
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.677583 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.177573342 +0000 UTC m=+154.884709475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.778665 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.778822 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.278801609 +0000 UTC m=+154.985937752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.778922 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x"
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.779248 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.279237171 +0000 UTC m=+154.986373314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.858165 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 17:46:25 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld
Jan 26 17:46:25 crc kubenswrapper[4787]: [+]process-running ok
Jan 26 17:46:25 crc kubenswrapper[4787]: healthz check failed
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.858250 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.879968 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.880197 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.380166219 +0000 UTC m=+155.087302352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.880253 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x"
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.880636 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.380618801 +0000 UTC m=+155.087754934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.927832 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2qktf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.927908 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf" podUID="6394eb5f-f1da-486a-93ec-845b88c8b85b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.981397 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.981614 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.48158649 +0000 UTC m=+155.188722623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:25 crc kubenswrapper[4787]: I0126 17:46:25.981660 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x"
Jan 26 17:46:25 crc kubenswrapper[4787]: E0126 17:46:25.981997 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.481989322 +0000 UTC m=+155.189125455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.009393 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ddplg" podStartSLOduration=135.009369036 podStartE2EDuration="2m15.009369036s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:25.739277965 +0000 UTC m=+154.446414098" watchObservedRunningTime="2026-01-26 17:46:26.009369036 +0000 UTC m=+154.716505179"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.010245 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rlvsg"]
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.011373 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rlvsg"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.013022 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.081656 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rlvsg"]
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.082326 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:46:26 crc kubenswrapper[4787]: E0126 17:46:26.082509 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.582482399 +0000 UTC m=+155.289618532 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.082596 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x"
Jan 26 17:46:26 crc kubenswrapper[4787]: E0126 17:46:26.082884 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.582873109 +0000 UTC m=+155.290009242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.186541 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.186818 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsr2p\" (UniqueName: \"kubernetes.io/projected/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-kube-api-access-lsr2p\") pod \"community-operators-rlvsg\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " pod="openshift-marketplace/community-operators-rlvsg"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.186941 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-utilities\") pod \"community-operators-rlvsg\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " pod="openshift-marketplace/community-operators-rlvsg"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.186985 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-catalog-content\") pod \"community-operators-rlvsg\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " pod="openshift-marketplace/community-operators-rlvsg"
Jan 26 17:46:26 crc kubenswrapper[4787]: E0126 17:46:26.188323 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.688302713 +0000 UTC m=+155.395438856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.201904 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dhsdw"]
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.203113 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dhsdw"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.212018 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.213256 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dhsdw"]
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.281775 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" event={"ID":"144a9368-752c-404b-9881-4a03a930b77a","Type":"ContainerStarted","Data":"d4b5ad181d6fb64072d65d8baafe09dc5175d26ddb9893fd7bfd84da03f6a1aa"}
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.288063 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.288149 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-utilities\") pod \"community-operators-rlvsg\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " pod="openshift-marketplace/community-operators-rlvsg"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.288174 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-catalog-content\") pod \"community-operators-rlvsg\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " pod="openshift-marketplace/community-operators-rlvsg"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.288211 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsr2p\" (UniqueName: \"kubernetes.io/projected/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-kube-api-access-lsr2p\") pod \"community-operators-rlvsg\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " pod="openshift-marketplace/community-operators-rlvsg"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.289434 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-utilities\") pod \"community-operators-rlvsg\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " pod="openshift-marketplace/community-operators-rlvsg"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.289462 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-catalog-content\") pod \"community-operators-rlvsg\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " pod="openshift-marketplace/community-operators-rlvsg"
Jan 26 17:46:26 crc kubenswrapper[4787]: E0126 17:46:26.289610 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.789595751 +0000 UTC m=+155.496731954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.291260 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-phjkm" event={"ID":"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc","Type":"ContainerStarted","Data":"6208138fbeb37aa58fa069701e7d9f1821e4a29dc61958bdbad73a6a2a3fecee"}
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.309200 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qktf"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.345764 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.349736 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsr2p\" (UniqueName: \"kubernetes.io/projected/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-kube-api-access-lsr2p\") pod \"community-operators-rlvsg\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " pod="openshift-marketplace/community-operators-rlvsg"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.364006 4787 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.390829 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.391150 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-catalog-content\") pod \"certified-operators-dhsdw\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " pod="openshift-marketplace/certified-operators-dhsdw"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.391177 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ddfw\" (UniqueName: \"kubernetes.io/projected/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-kube-api-access-4ddfw\") pod \"certified-operators-dhsdw\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " pod="openshift-marketplace/certified-operators-dhsdw"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.391517 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-utilities\") pod \"certified-operators-dhsdw\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " pod="openshift-marketplace/certified-operators-dhsdw"
Jan 26 17:46:26 crc kubenswrapper[4787]: E0126 17:46:26.395318 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.895297352 +0000 UTC m=+155.602433485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.415845 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sw544"]
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.416820 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw544"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.434384 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sw544"]
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.492668 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dxhs\" (UniqueName: \"kubernetes.io/projected/9e0a0430-30c3-40a8-aa3d-91c784c54e36-kube-api-access-8dxhs\") pod \"community-operators-sw544\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") " pod="openshift-marketplace/community-operators-sw544"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.493048 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-utilities\") pod \"certified-operators-dhsdw\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " pod="openshift-marketplace/certified-operators-dhsdw"
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.493092 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.493157 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-utilities\") pod \"community-operators-sw544\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") " pod="openshift-marketplace/community-operators-sw544" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.493180 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-catalog-content\") pod \"community-operators-sw544\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") " pod="openshift-marketplace/community-operators-sw544" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.493206 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-catalog-content\") pod \"certified-operators-dhsdw\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.493226 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ddfw\" (UniqueName: \"kubernetes.io/projected/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-kube-api-access-4ddfw\") pod \"certified-operators-dhsdw\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.493609 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-utilities\") pod \"certified-operators-dhsdw\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:46:26 crc kubenswrapper[4787]: E0126 17:46:26.493881 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:26.993865764 +0000 UTC m=+155.701001897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.493918 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-catalog-content\") pod \"certified-operators-dhsdw\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.536098 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ddfw\" (UniqueName: \"kubernetes.io/projected/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-kube-api-access-4ddfw\") pod \"certified-operators-dhsdw\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.550590 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.593706 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.594169 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dxhs\" (UniqueName: \"kubernetes.io/projected/9e0a0430-30c3-40a8-aa3d-91c784c54e36-kube-api-access-8dxhs\") pod \"community-operators-sw544\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") " pod="openshift-marketplace/community-operators-sw544" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.594336 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-utilities\") pod \"community-operators-sw544\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") " pod="openshift-marketplace/community-operators-sw544" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.594447 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-catalog-content\") pod \"community-operators-sw544\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") " pod="openshift-marketplace/community-operators-sw544" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.595047 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-catalog-content\") pod \"community-operators-sw544\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") 
" pod="openshift-marketplace/community-operators-sw544" Jan 26 17:46:26 crc kubenswrapper[4787]: E0126 17:46:26.595215 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:46:27.095195894 +0000 UTC m=+155.802332037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.595865 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-utilities\") pod \"community-operators-sw544\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") " pod="openshift-marketplace/community-operators-sw544" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.602743 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bxkns"] Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.604207 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.626516 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bxkns"] Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.627455 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rlvsg" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.639818 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dxhs\" (UniqueName: \"kubernetes.io/projected/9e0a0430-30c3-40a8-aa3d-91c784c54e36-kube-api-access-8dxhs\") pod \"community-operators-sw544\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") " pod="openshift-marketplace/community-operators-sw544" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.695787 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.695881 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-catalog-content\") pod \"certified-operators-bxkns\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.695907 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msv4z\" (UniqueName: \"kubernetes.io/projected/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-kube-api-access-msv4z\") pod \"certified-operators-bxkns\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.695942 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-utilities\") pod \"certified-operators-bxkns\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:46:26 crc kubenswrapper[4787]: E0126 17:46:26.696374 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:46:27.196358549 +0000 UTC m=+155.903494692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b7v9x" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.757237 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sw544" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.759305 4787 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T17:46:26.36403751Z","Handler":null,"Name":""} Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.778205 4787 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.778448 4787 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.796681 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.796929 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-catalog-content\") pod \"certified-operators-bxkns\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.796976 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msv4z\" (UniqueName: \"kubernetes.io/projected/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-kube-api-access-msv4z\") pod \"certified-operators-bxkns\" (UID: 
\"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.797003 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-utilities\") pod \"certified-operators-bxkns\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.797828 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-utilities\") pod \"certified-operators-bxkns\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.798217 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-catalog-content\") pod \"certified-operators-bxkns\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.823512 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msv4z\" (UniqueName: \"kubernetes.io/projected/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-kube-api-access-msv4z\") pod \"certified-operators-bxkns\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.842151 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.861146 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:26 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:26 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:26 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.861213 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.900320 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.908639 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.908684 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:26 crc kubenswrapper[4787]: I0126 17:46:26.935642 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.040433 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b7v9x\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.092806 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dhsdw"] Jan 26 17:46:27 crc kubenswrapper[4787]: W0126 17:46:27.115432 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1402183e_7fc9_4a3f_96ed_14aa047ffd9b.slice/crio-1a229566ad612f621f9e0bbe638ffbcc416ff82c17e3ea21359b74cb0dcab800 WatchSource:0}: Error finding container 1a229566ad612f621f9e0bbe638ffbcc416ff82c17e3ea21359b74cb0dcab800: Status 404 returned error can't find the container with id 1a229566ad612f621f9e0bbe638ffbcc416ff82c17e3ea21359b74cb0dcab800 Jan 26 17:46:27 crc 
kubenswrapper[4787]: I0126 17:46:27.153382 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rlvsg"] Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.213689 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.305466 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sw544"] Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.324860 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-phjkm" event={"ID":"0c5028e0-2b2d-4b00-8cc7-8c68d11ff5dc","Type":"ContainerStarted","Data":"0223dfa399250ad56a11c842505b6d91db79e2d107c4cb5f87c4ca5bb5fcff18"} Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.338916 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlvsg" event={"ID":"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242","Type":"ContainerStarted","Data":"cca27a3b4aa4940835153c553d9f66ba91a8c95b664e032264e9ed7984ab38dc"} Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.378656 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhsdw" event={"ID":"1402183e-7fc9-4a3f-96ed-14aa047ffd9b","Type":"ContainerStarted","Data":"1a229566ad612f621f9e0bbe638ffbcc416ff82c17e3ea21359b74cb0dcab800"} Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.397346 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" event={"ID":"144a9368-752c-404b-9881-4a03a930b77a","Type":"ContainerStarted","Data":"99853918436b279ffd2fa73c902f59ba049b700de8c15a68f01cac5ee1570dab"} Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.418688 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="hostpath-provisioner/csi-hostpathplugin-phjkm" podStartSLOduration=11.418672697 podStartE2EDuration="11.418672697s" podCreationTimestamp="2026-01-26 17:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:27.357406866 +0000 UTC m=+156.064543009" watchObservedRunningTime="2026-01-26 17:46:27.418672697 +0000 UTC m=+156.125808830" Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.432923 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bxkns"] Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.557099 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" podStartSLOduration=136.557078402 podStartE2EDuration="2m16.557078402s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:27.450534377 +0000 UTC m=+156.157670510" watchObservedRunningTime="2026-01-26 17:46:27.557078402 +0000 UTC m=+156.264214535" Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.560113 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b7v9x"] Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.621563 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.797643 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8l69c" Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.854191 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:27 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:27 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:27 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.854566 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:27 crc kubenswrapper[4787]: I0126 17:46:27.999086 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hb78l"] Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.000298 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.002672 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.017845 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hb78l"] Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.060018 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.060095 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.061833 4787 patch_prober.go:28] interesting pod/console-f9d7485db-gndbr container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.061898 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gndbr" podUID="11106d86-1e86-47cf-907d-9fb690a4f56e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.127409 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7hs5\" (UniqueName: \"kubernetes.io/projected/52aea5c2-6bf8-4f4f-823f-df45dd468c17-kube-api-access-t7hs5\") pod \"redhat-marketplace-hb78l\" (UID: \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.127559 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-catalog-content\") pod \"redhat-marketplace-hb78l\" (UID: \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.127665 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-utilities\") pod \"redhat-marketplace-hb78l\" (UID: \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.229442 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7hs5\" (UniqueName: \"kubernetes.io/projected/52aea5c2-6bf8-4f4f-823f-df45dd468c17-kube-api-access-t7hs5\") pod 
\"redhat-marketplace-hb78l\" (UID: \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.231059 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-catalog-content\") pod \"redhat-marketplace-hb78l\" (UID: \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.231199 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-utilities\") pod \"redhat-marketplace-hb78l\" (UID: \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.231387 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-catalog-content\") pod \"redhat-marketplace-hb78l\" (UID: \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.231606 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-utilities\") pod \"redhat-marketplace-hb78l\" (UID: \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.255046 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7hs5\" (UniqueName: \"kubernetes.io/projected/52aea5c2-6bf8-4f4f-823f-df45dd468c17-kube-api-access-t7hs5\") pod \"redhat-marketplace-hb78l\" (UID: 
\"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.257453 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.337436 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.399456 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b7jlq"] Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.400507 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.407915 4787 generic.go:334] "Generic (PLEG): container finished" podID="2e28a309-ee20-408a-b0a1-d1c457139803" containerID="f5fa366160dea108a479e35d70501195180c7b65c253f9f0ed485ff780a7f647" exitCode=0 Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.407988 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" event={"ID":"2e28a309-ee20-408a-b0a1-d1c457139803","Type":"ContainerDied","Data":"f5fa366160dea108a479e35d70501195180c7b65c253f9f0ed485ff780a7f647"} Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.410092 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" event={"ID":"68eeb1f1-7fc1-49a4-a56e-40f06deac48a","Type":"ContainerStarted","Data":"37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d"} Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.410152 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" 
event={"ID":"68eeb1f1-7fc1-49a4-a56e-40f06deac48a","Type":"ContainerStarted","Data":"f61a70ce2598092ad10713e21f76b8bcdccc9457e74ef307753f2600f52c1b44"} Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.410862 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.415022 4787 generic.go:334] "Generic (PLEG): container finished" podID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" containerID="045c76e01bda865dc1cda6a23702b270f4c4f0164ae86975ff2581f83196e211" exitCode=0 Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.415166 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlvsg" event={"ID":"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242","Type":"ContainerDied","Data":"045c76e01bda865dc1cda6a23702b270f4c4f0164ae86975ff2581f83196e211"} Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.419690 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.427546 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7jlq"] Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.437132 4787 generic.go:334] "Generic (PLEG): container finished" podID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" containerID="9df4dc3069bc681a9f2c462821b3e2d120dac6d41378730e7eabfba015ac8f40" exitCode=0 Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.437235 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhsdw" event={"ID":"1402183e-7fc9-4a3f-96ed-14aa047ffd9b","Type":"ContainerDied","Data":"9df4dc3069bc681a9f2c462821b3e2d120dac6d41378730e7eabfba015ac8f40"} Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.440278 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.442750 4787 generic.go:334] "Generic (PLEG): container finished" podID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" containerID="ca91443675a1c5c6c273d3e90f4da6f39411d1c0fc4ed59f9039ae8b42fb1b4a" exitCode=0 Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.442815 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxkns" event={"ID":"07c03fa2-278b-49da-b3c9-a5b2e78a06c1","Type":"ContainerDied","Data":"ca91443675a1c5c6c273d3e90f4da6f39411d1c0fc4ed59f9039ae8b42fb1b4a"} Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.442842 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxkns" event={"ID":"07c03fa2-278b-49da-b3c9-a5b2e78a06c1","Type":"ContainerStarted","Data":"f9617ed6ad601af7f82294520c6d71ecd6ae2756e9974d722ea984dde50dfe33"} Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.457907 4787 generic.go:334] "Generic (PLEG): container finished" podID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" containerID="475db7c66e4df82d4e3126e9a75c28cd6b5a438831083850068413a9a37db93e" exitCode=0 Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.457989 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw544" event={"ID":"9e0a0430-30c3-40a8-aa3d-91c784c54e36","Type":"ContainerDied","Data":"475db7c66e4df82d4e3126e9a75c28cd6b5a438831083850068413a9a37db93e"} Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.458049 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw544" event={"ID":"9e0a0430-30c3-40a8-aa3d-91c784c54e36","Type":"ContainerStarted","Data":"50fc53f6874f956ce83117c7cdc6df20ba30085a9820e2cfc49b00ca10404d17"} Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.483430 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" podStartSLOduration=137.483414836 podStartE2EDuration="2m17.483414836s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:28.483394936 +0000 UTC m=+157.190531079" watchObservedRunningTime="2026-01-26 17:46:28.483414836 +0000 UTC m=+157.190550969" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.534588 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-catalog-content\") pod \"redhat-marketplace-b7jlq\" (UID: \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.535006 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-utilities\") pod \"redhat-marketplace-b7jlq\" (UID: \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.535105 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsm78\" (UniqueName: \"kubernetes.io/projected/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-kube-api-access-vsm78\") pod \"redhat-marketplace-b7jlq\" (UID: \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.636624 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsm78\" (UniqueName: \"kubernetes.io/projected/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-kube-api-access-vsm78\") pod 
\"redhat-marketplace-b7jlq\" (UID: \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.636767 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-catalog-content\") pod \"redhat-marketplace-b7jlq\" (UID: \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.637005 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-utilities\") pod \"redhat-marketplace-b7jlq\" (UID: \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.637529 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-utilities\") pod \"redhat-marketplace-b7jlq\" (UID: \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.639306 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-catalog-content\") pod \"redhat-marketplace-b7jlq\" (UID: \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.660847 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsm78\" (UniqueName: \"kubernetes.io/projected/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-kube-api-access-vsm78\") pod \"redhat-marketplace-b7jlq\" (UID: 
\"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.685160 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hb78l"] Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.726916 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.780926 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-qgmbn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.781239 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qgmbn" podUID="f5c13435-6ec0-4a59-b905-7c4ca0d5978f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.781807 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-qgmbn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.781864 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qgmbn" podUID="f5c13435-6ec0-4a59-b905-7c4ca0d5978f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.806071 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.808503 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.818638 4787 patch_prober.go:28] interesting pod/apiserver-76f77b778f-9fdv8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 17:46:28 crc kubenswrapper[4787]: [+]log ok Jan 26 17:46:28 crc kubenswrapper[4787]: [+]etcd ok Jan 26 17:46:28 crc kubenswrapper[4787]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 17:46:28 crc kubenswrapper[4787]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 17:46:28 crc kubenswrapper[4787]: [+]poststarthook/max-in-flight-filter ok Jan 26 17:46:28 crc kubenswrapper[4787]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 17:46:28 crc kubenswrapper[4787]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 17:46:28 crc kubenswrapper[4787]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 17:46:28 crc kubenswrapper[4787]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 17:46:28 crc kubenswrapper[4787]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 17:46:28 crc kubenswrapper[4787]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 17:46:28 crc kubenswrapper[4787]: [+]poststarthook/openshift.io-startinformers ok Jan 26 17:46:28 crc kubenswrapper[4787]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 17:46:28 crc kubenswrapper[4787]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 17:46:28 crc kubenswrapper[4787]: livez check failed Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.818695 4787 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" podUID="144a9368-752c-404b-9881-4a03a930b77a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.842427 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.845539 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.847435 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.851296 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.851525 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.854135 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:28 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:28 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:28 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.854226 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.862339 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.940166 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4ddedfeb-7e83-46b2-a30a-00566d5e1cda\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:46:28 crc kubenswrapper[4787]: I0126 17:46:28.940409 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4ddedfeb-7e83-46b2-a30a-00566d5e1cda\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.014111 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7jlq"] Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.041627 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4ddedfeb-7e83-46b2-a30a-00566d5e1cda\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.041732 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4ddedfeb-7e83-46b2-a30a-00566d5e1cda\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.041924 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4ddedfeb-7e83-46b2-a30a-00566d5e1cda\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.059661 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4ddedfeb-7e83-46b2-a30a-00566d5e1cda\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.168432 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.410102 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xgdzk"] Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.411825 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.414798 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.415472 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xgdzk"] Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.501384 4787 generic.go:334] "Generic (PLEG): container finished" podID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" containerID="b3b80f45dfb99a8018f2b3bab892f97cdba4d42c09ce34afe95eab5fd6be082e" exitCode=0 Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.501491 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7jlq" event={"ID":"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b","Type":"ContainerDied","Data":"b3b80f45dfb99a8018f2b3bab892f97cdba4d42c09ce34afe95eab5fd6be082e"} Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.501519 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7jlq" event={"ID":"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b","Type":"ContainerStarted","Data":"e684bdf58d0bed8c23a7a6e4ee658331c3c44613f2fa57ddbf5ba3e8ca82f6c2"} Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.514286 4787 generic.go:334] "Generic (PLEG): container finished" podID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" containerID="69d16a82358a678816fc279a3b99ecf811a48422af0d2bf5d291d3db8c983fc7" exitCode=0 Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.516109 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hb78l" event={"ID":"52aea5c2-6bf8-4f4f-823f-df45dd468c17","Type":"ContainerDied","Data":"69d16a82358a678816fc279a3b99ecf811a48422af0d2bf5d291d3db8c983fc7"} Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.516183 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hb78l" event={"ID":"52aea5c2-6bf8-4f4f-823f-df45dd468c17","Type":"ContainerStarted","Data":"a09c4cf692637453b9fc7780fc5487fe51e75b8d5a7a39bf43d774fe7b1e2052"} Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.553177 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.579773 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrmh\" (UniqueName: \"kubernetes.io/projected/21da263d-6313-4403-93fc-220b5e976637-kube-api-access-sdrmh\") pod \"redhat-operators-xgdzk\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.579844 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-catalog-content\") pod \"redhat-operators-xgdzk\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.579894 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-utilities\") pod \"redhat-operators-xgdzk\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:46:29 crc kubenswrapper[4787]: W0126 17:46:29.658744 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4ddedfeb_7e83_46b2_a30a_00566d5e1cda.slice/crio-3373d6ddcbe92a7935c2f8f74605a8359e10dd7f72b583d4808fad709ce1544f WatchSource:0}: Error finding container 
3373d6ddcbe92a7935c2f8f74605a8359e10dd7f72b583d4808fad709ce1544f: Status 404 returned error can't find the container with id 3373d6ddcbe92a7935c2f8f74605a8359e10dd7f72b583d4808fad709ce1544f Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.680929 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdrmh\" (UniqueName: \"kubernetes.io/projected/21da263d-6313-4403-93fc-220b5e976637-kube-api-access-sdrmh\") pod \"redhat-operators-xgdzk\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.681070 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-catalog-content\") pod \"redhat-operators-xgdzk\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.681284 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-utilities\") pod \"redhat-operators-xgdzk\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.696375 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-utilities\") pod \"redhat-operators-xgdzk\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.696393 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-catalog-content\") pod 
\"redhat-operators-xgdzk\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.699084 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdrmh\" (UniqueName: \"kubernetes.io/projected/21da263d-6313-4403-93fc-220b5e976637-kube-api-access-sdrmh\") pod \"redhat-operators-xgdzk\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.734693 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.792442 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.795871 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vdcwm"] Jan 26 17:46:29 crc kubenswrapper[4787]: E0126 17:46:29.796118 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e28a309-ee20-408a-b0a1-d1c457139803" containerName="collect-profiles" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.796135 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e28a309-ee20-408a-b0a1-d1c457139803" containerName="collect-profiles" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.796244 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e28a309-ee20-408a-b0a1-d1c457139803" containerName="collect-profiles" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.796922 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.806411 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vdcwm"] Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.855805 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:29 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:29 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:29 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.856098 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.898106 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-765bs\" (UniqueName: \"kubernetes.io/projected/2e28a309-ee20-408a-b0a1-d1c457139803-kube-api-access-765bs\") pod \"2e28a309-ee20-408a-b0a1-d1c457139803\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.898317 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e28a309-ee20-408a-b0a1-d1c457139803-config-volume\") pod \"2e28a309-ee20-408a-b0a1-d1c457139803\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.898377 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2e28a309-ee20-408a-b0a1-d1c457139803-secret-volume\") pod \"2e28a309-ee20-408a-b0a1-d1c457139803\" (UID: \"2e28a309-ee20-408a-b0a1-d1c457139803\") " Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.898484 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bc8k\" (UniqueName: \"kubernetes.io/projected/aa0ed742-766e-4e25-887e-555a5420fe8b-kube-api-access-7bc8k\") pod \"redhat-operators-vdcwm\" (UID: \"aa0ed742-766e-4e25-887e-555a5420fe8b\") " pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.898619 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-utilities\") pod \"redhat-operators-vdcwm\" (UID: \"aa0ed742-766e-4e25-887e-555a5420fe8b\") " pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.898811 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-catalog-content\") pod \"redhat-operators-vdcwm\" (UID: \"aa0ed742-766e-4e25-887e-555a5420fe8b\") " pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.899206 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e28a309-ee20-408a-b0a1-d1c457139803-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e28a309-ee20-408a-b0a1-d1c457139803" (UID: "2e28a309-ee20-408a-b0a1-d1c457139803"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.903909 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e28a309-ee20-408a-b0a1-d1c457139803-kube-api-access-765bs" (OuterVolumeSpecName: "kube-api-access-765bs") pod "2e28a309-ee20-408a-b0a1-d1c457139803" (UID: "2e28a309-ee20-408a-b0a1-d1c457139803"). InnerVolumeSpecName "kube-api-access-765bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:46:29 crc kubenswrapper[4787]: I0126 17:46:29.904612 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e28a309-ee20-408a-b0a1-d1c457139803-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e28a309-ee20-408a-b0a1-d1c457139803" (UID: "2e28a309-ee20-408a-b0a1-d1c457139803"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.000552 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-catalog-content\") pod \"redhat-operators-vdcwm\" (UID: \"aa0ed742-766e-4e25-887e-555a5420fe8b\") " pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.000626 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bc8k\" (UniqueName: \"kubernetes.io/projected/aa0ed742-766e-4e25-887e-555a5420fe8b-kube-api-access-7bc8k\") pod \"redhat-operators-vdcwm\" (UID: \"aa0ed742-766e-4e25-887e-555a5420fe8b\") " pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.000668 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-utilities\") pod 
\"redhat-operators-vdcwm\" (UID: \"aa0ed742-766e-4e25-887e-555a5420fe8b\") " pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.000772 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e28a309-ee20-408a-b0a1-d1c457139803-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.000789 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e28a309-ee20-408a-b0a1-d1c457139803-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.000801 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-765bs\" (UniqueName: \"kubernetes.io/projected/2e28a309-ee20-408a-b0a1-d1c457139803-kube-api-access-765bs\") on node \"crc\" DevicePath \"\"" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.001017 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-catalog-content\") pod \"redhat-operators-vdcwm\" (UID: \"aa0ed742-766e-4e25-887e-555a5420fe8b\") " pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.001271 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-utilities\") pod \"redhat-operators-vdcwm\" (UID: \"aa0ed742-766e-4e25-887e-555a5420fe8b\") " pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.019105 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bc8k\" (UniqueName: \"kubernetes.io/projected/aa0ed742-766e-4e25-887e-555a5420fe8b-kube-api-access-7bc8k\") pod \"redhat-operators-vdcwm\" (UID: 
\"aa0ed742-766e-4e25-887e-555a5420fe8b\") " pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.124405 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.196614 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xgdzk"] Jan 26 17:46:30 crc kubenswrapper[4787]: W0126 17:46:30.252602 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21da263d_6313_4403_93fc_220b5e976637.slice/crio-a4581c4761b47a673e3f406e6af70b6a5feb7638aa564836868b45b22789e5fe WatchSource:0}: Error finding container a4581c4761b47a673e3f406e6af70b6a5feb7638aa564836868b45b22789e5fe: Status 404 returned error can't find the container with id a4581c4761b47a673e3f406e6af70b6a5feb7638aa564836868b45b22789e5fe Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.526569 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vdcwm"] Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.532513 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" event={"ID":"2e28a309-ee20-408a-b0a1-d1c457139803","Type":"ContainerDied","Data":"5e0362f7fb500806ec38c17e4b443e3808d9a0578c48eff8c5ba5d664947c57b"} Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.532580 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e0362f7fb500806ec38c17e4b443e3808d9a0578c48eff8c5ba5d664947c57b" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.532632 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74" Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.542518 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgdzk" event={"ID":"21da263d-6313-4403-93fc-220b5e976637","Type":"ContainerStarted","Data":"a4581c4761b47a673e3f406e6af70b6a5feb7638aa564836868b45b22789e5fe"} Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.548216 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4ddedfeb-7e83-46b2-a30a-00566d5e1cda","Type":"ContainerStarted","Data":"3373d6ddcbe92a7935c2f8f74605a8359e10dd7f72b583d4808fad709ce1544f"} Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.865471 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:30 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:30 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:30 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:30 crc kubenswrapper[4787]: I0126 17:46:30.865882 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:31 crc kubenswrapper[4787]: I0126 17:46:31.562201 4787 generic.go:334] "Generic (PLEG): container finished" podID="21da263d-6313-4403-93fc-220b5e976637" containerID="4d0c15b5790606565321b7415acecc23c774182cd30e504653186f6bf9fcab94" exitCode=0 Jan 26 17:46:31 crc kubenswrapper[4787]: I0126 17:46:31.562279 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xgdzk" event={"ID":"21da263d-6313-4403-93fc-220b5e976637","Type":"ContainerDied","Data":"4d0c15b5790606565321b7415acecc23c774182cd30e504653186f6bf9fcab94"} Jan 26 17:46:31 crc kubenswrapper[4787]: I0126 17:46:31.564248 4787 generic.go:334] "Generic (PLEG): container finished" podID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerID="5951f3324050f06d493a28dff046c1e7c2efc725e88772fcb2982d4e99c9b4df" exitCode=0 Jan 26 17:46:31 crc kubenswrapper[4787]: I0126 17:46:31.564302 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcwm" event={"ID":"aa0ed742-766e-4e25-887e-555a5420fe8b","Type":"ContainerDied","Data":"5951f3324050f06d493a28dff046c1e7c2efc725e88772fcb2982d4e99c9b4df"} Jan 26 17:46:31 crc kubenswrapper[4787]: I0126 17:46:31.564324 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcwm" event={"ID":"aa0ed742-766e-4e25-887e-555a5420fe8b","Type":"ContainerStarted","Data":"673c3c66261ecc11f8c6d64ec5386519924db063e08430822defe98b982d5906"} Jan 26 17:46:31 crc kubenswrapper[4787]: I0126 17:46:31.570081 4787 generic.go:334] "Generic (PLEG): container finished" podID="4ddedfeb-7e83-46b2-a30a-00566d5e1cda" containerID="41c55aadade75a5773185a4548d52251fa6b25a509a776a4840e1b05a5a8a3bc" exitCode=0 Jan 26 17:46:31 crc kubenswrapper[4787]: I0126 17:46:31.570130 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4ddedfeb-7e83-46b2-a30a-00566d5e1cda","Type":"ContainerDied","Data":"41c55aadade75a5773185a4548d52251fa6b25a509a776a4840e1b05a5a8a3bc"} Jan 26 17:46:31 crc kubenswrapper[4787]: I0126 17:46:31.861006 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 
17:46:31 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:31 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:31 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:31 crc kubenswrapper[4787]: I0126 17:46:31.861060 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.166996 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.169544 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.171570 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.173578 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.173814 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.236330 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.236393 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.337249 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.337312 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.337356 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.431340 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.496150 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.855106 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:32 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:32 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:32 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.855357 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.868707 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:46:32 crc kubenswrapper[4787]: I0126 17:46:32.992299 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.048650 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kube-api-access\") pod \"4ddedfeb-7e83-46b2-a30a-00566d5e1cda\" (UID: \"4ddedfeb-7e83-46b2-a30a-00566d5e1cda\") " Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.048705 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kubelet-dir\") pod \"4ddedfeb-7e83-46b2-a30a-00566d5e1cda\" (UID: \"4ddedfeb-7e83-46b2-a30a-00566d5e1cda\") " Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.049100 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4ddedfeb-7e83-46b2-a30a-00566d5e1cda" (UID: "4ddedfeb-7e83-46b2-a30a-00566d5e1cda"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.054667 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4ddedfeb-7e83-46b2-a30a-00566d5e1cda" (UID: "4ddedfeb-7e83-46b2-a30a-00566d5e1cda"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.150864 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.150926 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ddedfeb-7e83-46b2-a30a-00566d5e1cda-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.586976 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4ddedfeb-7e83-46b2-a30a-00566d5e1cda","Type":"ContainerDied","Data":"3373d6ddcbe92a7935c2f8f74605a8359e10dd7f72b583d4808fad709ce1544f"} Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.587435 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3373d6ddcbe92a7935c2f8f74605a8359e10dd7f72b583d4808fad709ce1544f" Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.587017 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.599282 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc","Type":"ContainerStarted","Data":"e0103383ab7284c2951eafd063e1300b03955186be54337de9753408b4adafbb"} Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.811082 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.817541 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9fdv8" Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.853262 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:33 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:33 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:33 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.853317 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.979732 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " 
pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:46:33 crc kubenswrapper[4787]: I0126 17:46:33.988839 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f04b2906-5567-4455-a1e8-5d85d5ea882e-metrics-certs\") pod \"network-metrics-daemon-vkdfd\" (UID: \"f04b2906-5567-4455-a1e8-5d85d5ea882e\") " pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:46:34 crc kubenswrapper[4787]: I0126 17:46:34.130837 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vkdfd" Jan 26 17:46:34 crc kubenswrapper[4787]: I0126 17:46:34.429190 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vkdfd"] Jan 26 17:46:34 crc kubenswrapper[4787]: I0126 17:46:34.594997 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" event={"ID":"f04b2906-5567-4455-a1e8-5d85d5ea882e","Type":"ContainerStarted","Data":"abe427312b6fadbdd4b74b215c4d756893e1f9efee882881c79f9b2c4732af36"} Jan 26 17:46:34 crc kubenswrapper[4787]: I0126 17:46:34.604812 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xxrkw" Jan 26 17:46:34 crc kubenswrapper[4787]: I0126 17:46:34.854169 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:34 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:34 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:34 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:34 crc kubenswrapper[4787]: I0126 17:46:34.854235 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" 
podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:35 crc kubenswrapper[4787]: I0126 17:46:35.601374 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc","Type":"ContainerStarted","Data":"6bcadd9bf026c9a5c3b01508dc10551766645fca580c8022658bde894eba6881"} Jan 26 17:46:35 crc kubenswrapper[4787]: I0126 17:46:35.853244 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:35 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:35 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:35 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:35 crc kubenswrapper[4787]: I0126 17:46:35.853340 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:36 crc kubenswrapper[4787]: I0126 17:46:36.610669 4787 generic.go:334] "Generic (PLEG): container finished" podID="58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc" containerID="6bcadd9bf026c9a5c3b01508dc10551766645fca580c8022658bde894eba6881" exitCode=0 Jan 26 17:46:36 crc kubenswrapper[4787]: I0126 17:46:36.610753 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc","Type":"ContainerDied","Data":"6bcadd9bf026c9a5c3b01508dc10551766645fca580c8022658bde894eba6881"} Jan 26 17:46:36 crc kubenswrapper[4787]: I0126 17:46:36.615381 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-vkdfd" event={"ID":"f04b2906-5567-4455-a1e8-5d85d5ea882e","Type":"ContainerStarted","Data":"dcf7331e1c46c6840a99aa5f85a73259ada9a3dcdcc17ec0775c599338b5b92d"} Jan 26 17:46:36 crc kubenswrapper[4787]: I0126 17:46:36.854860 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:36 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:36 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:36 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:36 crc kubenswrapper[4787]: I0126 17:46:36.855261 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:37 crc kubenswrapper[4787]: I0126 17:46:37.623245 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vkdfd" event={"ID":"f04b2906-5567-4455-a1e8-5d85d5ea882e","Type":"ContainerStarted","Data":"49ad50abe1740dadf4c43922b66d8cef555a046963a7884ec89353da43a6da74"} Jan 26 17:46:37 crc kubenswrapper[4787]: I0126 17:46:37.854850 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:37 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:37 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:37 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:37 crc kubenswrapper[4787]: I0126 17:46:37.855392 4787 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:37 crc kubenswrapper[4787]: I0126 17:46:37.889003 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:46:37 crc kubenswrapper[4787]: I0126 17:46:37.966472 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kubelet-dir\") pod \"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc\" (UID: \"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc\") " Jan 26 17:46:37 crc kubenswrapper[4787]: I0126 17:46:37.966535 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kube-api-access\") pod \"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc\" (UID: \"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc\") " Jan 26 17:46:37 crc kubenswrapper[4787]: I0126 17:46:37.966582 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc" (UID: "58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:46:37 crc kubenswrapper[4787]: I0126 17:46:37.966784 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:46:37 crc kubenswrapper[4787]: I0126 17:46:37.994197 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc" (UID: "58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:46:38 crc kubenswrapper[4787]: I0126 17:46:38.061276 4787 patch_prober.go:28] interesting pod/console-f9d7485db-gndbr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 17:46:38 crc kubenswrapper[4787]: I0126 17:46:38.061344 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gndbr" podUID="11106d86-1e86-47cf-907d-9fb690a4f56e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 17:46:38 crc kubenswrapper[4787]: I0126 17:46:38.070380 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 17:46:38 crc kubenswrapper[4787]: I0126 17:46:38.631452 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc","Type":"ContainerDied","Data":"e0103383ab7284c2951eafd063e1300b03955186be54337de9753408b4adafbb"} Jan 26 17:46:38 crc kubenswrapper[4787]: I0126 17:46:38.631495 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0103383ab7284c2951eafd063e1300b03955186be54337de9753408b4adafbb" Jan 26 17:46:38 crc kubenswrapper[4787]: I0126 17:46:38.631473 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:46:38 crc kubenswrapper[4787]: I0126 17:46:38.801494 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-qgmbn" Jan 26 17:46:38 crc kubenswrapper[4787]: I0126 17:46:38.819151 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vkdfd" podStartSLOduration=147.819123607 podStartE2EDuration="2m27.819123607s" podCreationTimestamp="2026-01-26 17:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:46:38.650377936 +0000 UTC m=+167.357514209" watchObservedRunningTime="2026-01-26 17:46:38.819123607 +0000 UTC m=+167.526259740" Jan 26 17:46:38 crc kubenswrapper[4787]: I0126 17:46:38.854637 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:38 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:38 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:38 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:38 crc kubenswrapper[4787]: I0126 17:46:38.854703 4787 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:39 crc kubenswrapper[4787]: I0126 17:46:39.856705 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:39 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:39 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:39 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:39 crc kubenswrapper[4787]: I0126 17:46:39.856768 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:40 crc kubenswrapper[4787]: I0126 17:46:40.853579 4787 patch_prober.go:28] interesting pod/router-default-5444994796-gnbmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:46:40 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 26 17:46:40 crc kubenswrapper[4787]: [+]process-running ok Jan 26 17:46:40 crc kubenswrapper[4787]: healthz check failed Jan 26 17:46:40 crc kubenswrapper[4787]: I0126 17:46:40.853647 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnbmj" podUID="7ae4d848-8522-4dbf-bc18-5292c04f6a38" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:46:41 crc kubenswrapper[4787]: I0126 17:46:41.853891 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:41 crc kubenswrapper[4787]: I0126 17:46:41.856431 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-gnbmj" Jan 26 17:46:46 crc kubenswrapper[4787]: I0126 17:46:46.807742 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:46:46 crc kubenswrapper[4787]: I0126 17:46:46.808201 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:46:47 crc kubenswrapper[4787]: I0126 17:46:47.220671 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:46:48 crc kubenswrapper[4787]: I0126 17:46:48.140256 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:48 crc kubenswrapper[4787]: I0126 17:46:48.149764 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:46:59 crc kubenswrapper[4787]: I0126 17:46:59.220554 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:46:59 crc kubenswrapper[4787]: I0126 17:46:59.572996 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2kj2" Jan 26 
17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.362345 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 17:47:07 crc kubenswrapper[4787]: E0126 17:47:07.363169 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ddedfeb-7e83-46b2-a30a-00566d5e1cda" containerName="pruner" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.363183 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ddedfeb-7e83-46b2-a30a-00566d5e1cda" containerName="pruner" Jan 26 17:47:07 crc kubenswrapper[4787]: E0126 17:47:07.363203 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc" containerName="pruner" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.363215 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc" containerName="pruner" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.363335 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e76ad2-53ad-4ee2-aa8b-5391c55c4ddc" containerName="pruner" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.363350 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ddedfeb-7e83-46b2-a30a-00566d5e1cda" containerName="pruner" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.363769 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.371058 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.371403 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.373225 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.458004 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"23e37ac9-ea49-4b4f-a81b-c864ebe70c18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.458225 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23e37ac9-ea49-4b4f-a81b-c864ebe70c18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.559946 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23e37ac9-ea49-4b4f-a81b-c864ebe70c18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.560025 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"23e37ac9-ea49-4b4f-a81b-c864ebe70c18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.560181 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"23e37ac9-ea49-4b4f-a81b-c864ebe70c18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:47:07 crc kubenswrapper[4787]: I0126 17:47:07.731510 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23e37ac9-ea49-4b4f-a81b-c864ebe70c18\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:47:08 crc kubenswrapper[4787]: I0126 17:47:07.999432 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:47:11 crc kubenswrapper[4787]: E0126 17:47:11.104055 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 17:47:11 crc kubenswrapper[4787]: E0126 17:47:11.104502 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsr2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rlvsg_openshift-marketplace(3d6f5fee-a90c-49e0-b8e1-ed5c564c0242): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:47:11 crc kubenswrapper[4787]: E0126 17:47:11.105684 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rlvsg" podUID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.361155 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.366841 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.368303 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.511548 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-var-lock\") pod \"installer-9-crc\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.511603 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 
26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.511626 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3407c9bf-1f3e-457a-9bda-900758dc326f-kube-api-access\") pod \"installer-9-crc\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.613492 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-var-lock\") pod \"installer-9-crc\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.613543 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.613567 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3407c9bf-1f3e-457a-9bda-900758dc326f-kube-api-access\") pod \"installer-9-crc\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.613636 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-var-lock\") pod \"installer-9-crc\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.613755 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.632103 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3407c9bf-1f3e-457a-9bda-900758dc326f-kube-api-access\") pod \"installer-9-crc\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:11 crc kubenswrapper[4787]: I0126 17:47:11.690194 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:12 crc kubenswrapper[4787]: E0126 17:47:12.226028 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rlvsg" podUID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" Jan 26 17:47:12 crc kubenswrapper[4787]: E0126 17:47:12.248138 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 17:47:12 crc kubenswrapper[4787]: E0126 17:47:12.248511 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dxhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sw544_openshift-marketplace(9e0a0430-30c3-40a8-aa3d-91c784c54e36): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:47:12 crc kubenswrapper[4787]: E0126 17:47:12.249689 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sw544" podUID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" Jan 26 17:47:12 crc 
kubenswrapper[4787]: E0126 17:47:12.360068 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 17:47:12 crc kubenswrapper[4787]: E0126 17:47:12.360364 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ddfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-dhsdw_openshift-marketplace(1402183e-7fc9-4a3f-96ed-14aa047ffd9b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:47:12 crc kubenswrapper[4787]: E0126 17:47:12.361581 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dhsdw" podUID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" Jan 26 17:47:12 crc kubenswrapper[4787]: E0126 17:47:12.433788 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 17:47:12 crc kubenswrapper[4787]: E0126 17:47:12.434275 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-msv4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bxkns_openshift-marketplace(07c03fa2-278b-49da-b3c9-a5b2e78a06c1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:47:12 crc kubenswrapper[4787]: E0126 17:47:12.435544 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bxkns" podUID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" Jan 26 17:47:14 crc 
kubenswrapper[4787]: E0126 17:47:14.461820 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bxkns" podUID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" Jan 26 17:47:14 crc kubenswrapper[4787]: E0126 17:47:14.461821 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sw544" podUID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" Jan 26 17:47:14 crc kubenswrapper[4787]: E0126 17:47:14.461850 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dhsdw" podUID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" Jan 26 17:47:14 crc kubenswrapper[4787]: E0126 17:47:14.535115 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 17:47:14 crc kubenswrapper[4787]: E0126 17:47:14.535285 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vsm78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-b7jlq_openshift-marketplace(cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:47:14 crc kubenswrapper[4787]: E0126 17:47:14.536468 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-b7jlq" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" Jan 26 17:47:16 crc 
kubenswrapper[4787]: I0126 17:47:16.807583 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:47:16 crc kubenswrapper[4787]: I0126 17:47:16.807922 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:47:16 crc kubenswrapper[4787]: I0126 17:47:16.807995 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:47:16 crc kubenswrapper[4787]: I0126 17:47:16.808803 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:47:16 crc kubenswrapper[4787]: I0126 17:47:16.809381 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a" gracePeriod=600 Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.399898 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-b7jlq" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.444159 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.444596 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdrmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fall
backToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xgdzk_openshift-marketplace(21da263d-6313-4403-93fc-220b5e976637): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.445767 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xgdzk" podUID="21da263d-6313-4403-93fc-220b5e976637" Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.462313 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.462515 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7hs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hb78l_openshift-marketplace(52aea5c2-6bf8-4f4f-823f-df45dd468c17): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.464225 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hb78l" podUID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" Jan 26 17:47:17 crc 
kubenswrapper[4787]: E0126 17:47:17.475780 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.476097 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bc8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-vdcwm_openshift-marketplace(aa0ed742-766e-4e25-887e-555a5420fe8b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.477298 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vdcwm" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" Jan 26 17:47:17 crc kubenswrapper[4787]: I0126 17:47:17.836799 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 17:47:17 crc kubenswrapper[4787]: W0126 17:47:17.842913 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod23e37ac9_ea49_4b4f_a81b_c864ebe70c18.slice/crio-383b0e639ad98b089dc04baaa972cd4712d2019e1312a6f07bf0167b0f5dc39d WatchSource:0}: Error finding container 383b0e639ad98b089dc04baaa972cd4712d2019e1312a6f07bf0167b0f5dc39d: Status 404 returned error can't find the container with id 383b0e639ad98b089dc04baaa972cd4712d2019e1312a6f07bf0167b0f5dc39d Jan 26 17:47:17 crc kubenswrapper[4787]: I0126 17:47:17.849271 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"23e37ac9-ea49-4b4f-a81b-c864ebe70c18","Type":"ContainerStarted","Data":"383b0e639ad98b089dc04baaa972cd4712d2019e1312a6f07bf0167b0f5dc39d"} Jan 26 17:47:17 crc kubenswrapper[4787]: I0126 17:47:17.850829 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a" exitCode=0 Jan 26 17:47:17 crc kubenswrapper[4787]: I0126 17:47:17.851491 4787 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a"} Jan 26 17:47:17 crc kubenswrapper[4787]: I0126 17:47:17.851516 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"ba7264e4540695a88937d861739646936874b439b552b6bba19ee9dc46246481"} Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.853479 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vdcwm" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.853546 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xgdzk" podUID="21da263d-6313-4403-93fc-220b5e976637" Jan 26 17:47:17 crc kubenswrapper[4787]: E0126 17:47:17.853562 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hb78l" podUID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" Jan 26 17:47:17 crc kubenswrapper[4787]: I0126 17:47:17.898118 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 17:47:18 crc kubenswrapper[4787]: I0126 17:47:18.858623 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3407c9bf-1f3e-457a-9bda-900758dc326f","Type":"ContainerStarted","Data":"0531e1595057e2c52ef4f2b8c4ff58d89d7b71acdbb9c8002fbbf1a01849d20c"} Jan 26 17:47:18 crc kubenswrapper[4787]: I0126 17:47:18.860304 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3407c9bf-1f3e-457a-9bda-900758dc326f","Type":"ContainerStarted","Data":"b905446547f7027107b87795d0058cb9cc1c5ade51644fd050b969b23f4fa1cb"} Jan 26 17:47:18 crc kubenswrapper[4787]: I0126 17:47:18.861839 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"23e37ac9-ea49-4b4f-a81b-c864ebe70c18","Type":"ContainerStarted","Data":"9d8689c22fa4aad4d5aa30d33d0fac7e2668123523f550896ecd4a9ecc44176b"} Jan 26 17:47:18 crc kubenswrapper[4787]: I0126 17:47:18.885596 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=7.8855728769999995 podStartE2EDuration="7.885572877s" podCreationTimestamp="2026-01-26 17:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:47:18.874318006 +0000 UTC m=+207.581454139" watchObservedRunningTime="2026-01-26 17:47:18.885572877 +0000 UTC m=+207.592709010" Jan 26 17:47:18 crc kubenswrapper[4787]: I0126 17:47:18.896219 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=11.896197749 podStartE2EDuration="11.896197749s" podCreationTimestamp="2026-01-26 17:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:47:18.894982332 +0000 UTC m=+207.602118465" watchObservedRunningTime="2026-01-26 17:47:18.896197749 +0000 UTC 
m=+207.603333882" Jan 26 17:47:19 crc kubenswrapper[4787]: I0126 17:47:19.868912 4787 generic.go:334] "Generic (PLEG): container finished" podID="23e37ac9-ea49-4b4f-a81b-c864ebe70c18" containerID="9d8689c22fa4aad4d5aa30d33d0fac7e2668123523f550896ecd4a9ecc44176b" exitCode=0 Jan 26 17:47:19 crc kubenswrapper[4787]: I0126 17:47:19.868990 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"23e37ac9-ea49-4b4f-a81b-c864ebe70c18","Type":"ContainerDied","Data":"9d8689c22fa4aad4d5aa30d33d0fac7e2668123523f550896ecd4a9ecc44176b"} Jan 26 17:47:21 crc kubenswrapper[4787]: I0126 17:47:21.086282 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:47:21 crc kubenswrapper[4787]: I0126 17:47:21.235710 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kube-api-access\") pod \"23e37ac9-ea49-4b4f-a81b-c864ebe70c18\" (UID: \"23e37ac9-ea49-4b4f-a81b-c864ebe70c18\") " Jan 26 17:47:21 crc kubenswrapper[4787]: I0126 17:47:21.236110 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kubelet-dir\") pod \"23e37ac9-ea49-4b4f-a81b-c864ebe70c18\" (UID: \"23e37ac9-ea49-4b4f-a81b-c864ebe70c18\") " Jan 26 17:47:21 crc kubenswrapper[4787]: I0126 17:47:21.236266 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23e37ac9-ea49-4b4f-a81b-c864ebe70c18" (UID: "23e37ac9-ea49-4b4f-a81b-c864ebe70c18"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:47:21 crc kubenswrapper[4787]: I0126 17:47:21.236471 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:21 crc kubenswrapper[4787]: I0126 17:47:21.244196 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23e37ac9-ea49-4b4f-a81b-c864ebe70c18" (UID: "23e37ac9-ea49-4b4f-a81b-c864ebe70c18"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:47:21 crc kubenswrapper[4787]: I0126 17:47:21.337801 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23e37ac9-ea49-4b4f-a81b-c864ebe70c18-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:21 crc kubenswrapper[4787]: I0126 17:47:21.881239 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"23e37ac9-ea49-4b4f-a81b-c864ebe70c18","Type":"ContainerDied","Data":"383b0e639ad98b089dc04baaa972cd4712d2019e1312a6f07bf0167b0f5dc39d"} Jan 26 17:47:21 crc kubenswrapper[4787]: I0126 17:47:21.881611 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="383b0e639ad98b089dc04baaa972cd4712d2019e1312a6f07bf0167b0f5dc39d" Jan 26 17:47:21 crc kubenswrapper[4787]: I0126 17:47:21.881338 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:47:24 crc kubenswrapper[4787]: I0126 17:47:24.898077 4787 generic.go:334] "Generic (PLEG): container finished" podID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" containerID="5944600d89d732a45567252dcb9f022d989cb6da6de1407b46f68265cec6ec72" exitCode=0 Jan 26 17:47:24 crc kubenswrapper[4787]: I0126 17:47:24.898274 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlvsg" event={"ID":"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242","Type":"ContainerDied","Data":"5944600d89d732a45567252dcb9f022d989cb6da6de1407b46f68265cec6ec72"} Jan 26 17:47:26 crc kubenswrapper[4787]: I0126 17:47:26.909422 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlvsg" event={"ID":"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242","Type":"ContainerStarted","Data":"d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6"} Jan 26 17:47:26 crc kubenswrapper[4787]: I0126 17:47:26.930858 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rlvsg" podStartSLOduration=3.85321458 podStartE2EDuration="1m1.930818312s" podCreationTimestamp="2026-01-26 17:46:25 +0000 UTC" firstStartedPulling="2026-01-26 17:46:28.419460491 +0000 UTC m=+157.126596614" lastFinishedPulling="2026-01-26 17:47:26.497064213 +0000 UTC m=+215.204200346" observedRunningTime="2026-01-26 17:47:26.925540801 +0000 UTC m=+215.632676934" watchObservedRunningTime="2026-01-26 17:47:26.930818312 +0000 UTC m=+215.637954455" Jan 26 17:47:27 crc kubenswrapper[4787]: I0126 17:47:27.916720 4787 generic.go:334] "Generic (PLEG): container finished" podID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" containerID="1de47583fe44de423711e58b9facaff547427296a330be71029f3e6e0bfc5cca" exitCode=0 Jan 26 17:47:27 crc kubenswrapper[4787]: I0126 17:47:27.916813 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-sw544" event={"ID":"9e0a0430-30c3-40a8-aa3d-91c784c54e36","Type":"ContainerDied","Data":"1de47583fe44de423711e58b9facaff547427296a330be71029f3e6e0bfc5cca"} Jan 26 17:47:28 crc kubenswrapper[4787]: I0126 17:47:28.923640 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw544" event={"ID":"9e0a0430-30c3-40a8-aa3d-91c784c54e36","Type":"ContainerStarted","Data":"32a24df459e872b156bdf66fc5bc12c9bce78b90e837cad61794fd107aa94a09"} Jan 26 17:47:28 crc kubenswrapper[4787]: I0126 17:47:28.948574 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sw544" podStartSLOduration=3.093325517 podStartE2EDuration="1m2.948291027s" podCreationTimestamp="2026-01-26 17:46:26 +0000 UTC" firstStartedPulling="2026-01-26 17:46:28.461238547 +0000 UTC m=+157.168374670" lastFinishedPulling="2026-01-26 17:47:28.316204047 +0000 UTC m=+217.023340180" observedRunningTime="2026-01-26 17:47:28.947277756 +0000 UTC m=+217.654413909" watchObservedRunningTime="2026-01-26 17:47:28.948291027 +0000 UTC m=+217.655427290" Jan 26 17:47:29 crc kubenswrapper[4787]: I0126 17:47:29.932149 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgdzk" event={"ID":"21da263d-6313-4403-93fc-220b5e976637","Type":"ContainerStarted","Data":"a44eed48ccc820b3077cf22066b0d9100b86bf8b27e2793e1823220e25feba6e"} Jan 26 17:47:29 crc kubenswrapper[4787]: I0126 17:47:29.935914 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhsdw" event={"ID":"1402183e-7fc9-4a3f-96ed-14aa047ffd9b","Type":"ContainerStarted","Data":"943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0"} Jan 26 17:47:30 crc kubenswrapper[4787]: I0126 17:47:30.955753 4787 generic.go:334] "Generic (PLEG): container finished" podID="21da263d-6313-4403-93fc-220b5e976637" 
containerID="a44eed48ccc820b3077cf22066b0d9100b86bf8b27e2793e1823220e25feba6e" exitCode=0 Jan 26 17:47:30 crc kubenswrapper[4787]: I0126 17:47:30.956034 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgdzk" event={"ID":"21da263d-6313-4403-93fc-220b5e976637","Type":"ContainerDied","Data":"a44eed48ccc820b3077cf22066b0d9100b86bf8b27e2793e1823220e25feba6e"} Jan 26 17:47:30 crc kubenswrapper[4787]: I0126 17:47:30.962044 4787 generic.go:334] "Generic (PLEG): container finished" podID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" containerID="943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0" exitCode=0 Jan 26 17:47:30 crc kubenswrapper[4787]: I0126 17:47:30.962145 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhsdw" event={"ID":"1402183e-7fc9-4a3f-96ed-14aa047ffd9b","Type":"ContainerDied","Data":"943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0"} Jan 26 17:47:30 crc kubenswrapper[4787]: I0126 17:47:30.967325 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxkns" event={"ID":"07c03fa2-278b-49da-b3c9-a5b2e78a06c1","Type":"ContainerStarted","Data":"51ea0901e4a8b36612d0c769efbe37c8ac94457760d22bd04e7812a63a36530a"} Jan 26 17:47:31 crc kubenswrapper[4787]: I0126 17:47:31.979412 4787 generic.go:334] "Generic (PLEG): container finished" podID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" containerID="a71f82b48d2281275b4460e75cdf1c06b39271ce2222073e74151f540c45f265" exitCode=0 Jan 26 17:47:31 crc kubenswrapper[4787]: I0126 17:47:31.979498 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hb78l" event={"ID":"52aea5c2-6bf8-4f4f-823f-df45dd468c17","Type":"ContainerDied","Data":"a71f82b48d2281275b4460e75cdf1c06b39271ce2222073e74151f540c45f265"} Jan 26 17:47:31 crc kubenswrapper[4787]: I0126 17:47:31.983974 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-xgdzk" event={"ID":"21da263d-6313-4403-93fc-220b5e976637","Type":"ContainerStarted","Data":"dc0813857e7f1eb6087ffa648d7651526d9143930fdbd312b2c3b5a782358ce9"} Jan 26 17:47:31 crc kubenswrapper[4787]: I0126 17:47:31.992115 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhsdw" event={"ID":"1402183e-7fc9-4a3f-96ed-14aa047ffd9b","Type":"ContainerStarted","Data":"c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c"} Jan 26 17:47:31 crc kubenswrapper[4787]: I0126 17:47:31.996728 4787 generic.go:334] "Generic (PLEG): container finished" podID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" containerID="51ea0901e4a8b36612d0c769efbe37c8ac94457760d22bd04e7812a63a36530a" exitCode=0 Jan 26 17:47:31 crc kubenswrapper[4787]: I0126 17:47:31.996770 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxkns" event={"ID":"07c03fa2-278b-49da-b3c9-a5b2e78a06c1","Type":"ContainerDied","Data":"51ea0901e4a8b36612d0c769efbe37c8ac94457760d22bd04e7812a63a36530a"} Jan 26 17:47:32 crc kubenswrapper[4787]: I0126 17:47:32.050402 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dhsdw" podStartSLOduration=3.084488281 podStartE2EDuration="1m6.050378171s" podCreationTimestamp="2026-01-26 17:46:26 +0000 UTC" firstStartedPulling="2026-01-26 17:46:28.440140938 +0000 UTC m=+157.147277071" lastFinishedPulling="2026-01-26 17:47:31.406030838 +0000 UTC m=+220.113166961" observedRunningTime="2026-01-26 17:47:32.049203836 +0000 UTC m=+220.756339969" watchObservedRunningTime="2026-01-26 17:47:32.050378171 +0000 UTC m=+220.757514314" Jan 26 17:47:32 crc kubenswrapper[4787]: I0126 17:47:32.076923 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xgdzk" podStartSLOduration=2.275551456 podStartE2EDuration="1m3.076904965s" 
podCreationTimestamp="2026-01-26 17:46:29 +0000 UTC" firstStartedPulling="2026-01-26 17:46:30.544571287 +0000 UTC m=+159.251707420" lastFinishedPulling="2026-01-26 17:47:31.345924796 +0000 UTC m=+220.053060929" observedRunningTime="2026-01-26 17:47:32.075158092 +0000 UTC m=+220.782294255" watchObservedRunningTime="2026-01-26 17:47:32.076904965 +0000 UTC m=+220.784041098" Jan 26 17:47:33 crc kubenswrapper[4787]: I0126 17:47:33.005910 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxkns" event={"ID":"07c03fa2-278b-49da-b3c9-a5b2e78a06c1","Type":"ContainerStarted","Data":"fe647650de0df8335f66784bca743e55ac38a51b5bcf7bd0ddaf78cf1b7637df"} Jan 26 17:47:33 crc kubenswrapper[4787]: I0126 17:47:33.008350 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hb78l" event={"ID":"52aea5c2-6bf8-4f4f-823f-df45dd468c17","Type":"ContainerStarted","Data":"22ae17ead73697363ecf39bb80e9a43639802f5d774d652bb1baa63a8bbd11e4"} Jan 26 17:47:34 crc kubenswrapper[4787]: I0126 17:47:34.031703 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bxkns" podStartSLOduration=4.036194608 podStartE2EDuration="1m8.03168301s" podCreationTimestamp="2026-01-26 17:46:26 +0000 UTC" firstStartedPulling="2026-01-26 17:46:28.445091377 +0000 UTC m=+157.152227510" lastFinishedPulling="2026-01-26 17:47:32.440579779 +0000 UTC m=+221.147715912" observedRunningTime="2026-01-26 17:47:34.030820194 +0000 UTC m=+222.737956337" watchObservedRunningTime="2026-01-26 17:47:34.03168301 +0000 UTC m=+222.738819153" Jan 26 17:47:34 crc kubenswrapper[4787]: I0126 17:47:34.054611 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hb78l" podStartSLOduration=4.176321413 podStartE2EDuration="1m7.054589414s" podCreationTimestamp="2026-01-26 17:46:27 +0000 UTC" firstStartedPulling="2026-01-26 
17:46:29.517830809 +0000 UTC m=+158.224966942" lastFinishedPulling="2026-01-26 17:47:32.39609881 +0000 UTC m=+221.103234943" observedRunningTime="2026-01-26 17:47:34.054243414 +0000 UTC m=+222.761379547" watchObservedRunningTime="2026-01-26 17:47:34.054589414 +0000 UTC m=+222.761725547" Jan 26 17:47:36 crc kubenswrapper[4787]: I0126 17:47:36.551458 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:47:36 crc kubenswrapper[4787]: I0126 17:47:36.552393 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:47:36 crc kubenswrapper[4787]: I0126 17:47:36.627716 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rlvsg" Jan 26 17:47:36 crc kubenswrapper[4787]: I0126 17:47:36.628349 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rlvsg" Jan 26 17:47:36 crc kubenswrapper[4787]: I0126 17:47:36.687354 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rlvsg" Jan 26 17:47:36 crc kubenswrapper[4787]: I0126 17:47:36.687429 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:47:36 crc kubenswrapper[4787]: I0126 17:47:36.757646 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sw544" Jan 26 17:47:36 crc kubenswrapper[4787]: I0126 17:47:36.757725 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sw544" Jan 26 17:47:36 crc kubenswrapper[4787]: I0126 17:47:36.794882 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sw544" 
Jan 26 17:47:36 crc kubenswrapper[4787]: I0126 17:47:36.936213 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:47:36 crc kubenswrapper[4787]: I0126 17:47:36.936272 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:47:37 crc kubenswrapper[4787]: I0126 17:47:37.029632 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcwm" event={"ID":"aa0ed742-766e-4e25-887e-555a5420fe8b","Type":"ContainerStarted","Data":"fa5dfd7f1dd4889fc03b78c45ae695901d21a869e1187cc8f3da911b69c0ec3b"} Jan 26 17:47:37 crc kubenswrapper[4787]: I0126 17:47:37.031262 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7jlq" event={"ID":"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b","Type":"ContainerStarted","Data":"95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09"} Jan 26 17:47:37 crc kubenswrapper[4787]: I0126 17:47:37.069837 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rlvsg" Jan 26 17:47:37 crc kubenswrapper[4787]: I0126 17:47:37.077600 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sw544" Jan 26 17:47:37 crc kubenswrapper[4787]: I0126 17:47:37.080321 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:47:37 crc kubenswrapper[4787]: I0126 17:47:37.441437 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:47:37 crc kubenswrapper[4787]: I0126 17:47:37.482636 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:47:38 crc 
kubenswrapper[4787]: I0126 17:47:38.038274 4787 generic.go:334] "Generic (PLEG): container finished" podID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerID="fa5dfd7f1dd4889fc03b78c45ae695901d21a869e1187cc8f3da911b69c0ec3b" exitCode=0 Jan 26 17:47:38 crc kubenswrapper[4787]: I0126 17:47:38.038309 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcwm" event={"ID":"aa0ed742-766e-4e25-887e-555a5420fe8b","Type":"ContainerDied","Data":"fa5dfd7f1dd4889fc03b78c45ae695901d21a869e1187cc8f3da911b69c0ec3b"} Jan 26 17:47:38 crc kubenswrapper[4787]: I0126 17:47:38.040135 4787 generic.go:334] "Generic (PLEG): container finished" podID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" containerID="95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09" exitCode=0 Jan 26 17:47:38 crc kubenswrapper[4787]: I0126 17:47:38.040302 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7jlq" event={"ID":"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b","Type":"ContainerDied","Data":"95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09"} Jan 26 17:47:38 crc kubenswrapper[4787]: I0126 17:47:38.341486 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:47:38 crc kubenswrapper[4787]: I0126 17:47:38.341578 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:47:38 crc kubenswrapper[4787]: I0126 17:47:38.385827 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:47:38 crc kubenswrapper[4787]: I0126 17:47:38.938367 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sw544"] Jan 26 17:47:39 crc kubenswrapper[4787]: I0126 17:47:39.089773 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:47:39 crc kubenswrapper[4787]: I0126 17:47:39.548571 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bxkns"] Jan 26 17:47:39 crc kubenswrapper[4787]: I0126 17:47:39.548852 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bxkns" podUID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" containerName="registry-server" containerID="cri-o://fe647650de0df8335f66784bca743e55ac38a51b5bcf7bd0ddaf78cf1b7637df" gracePeriod=2 Jan 26 17:47:39 crc kubenswrapper[4787]: I0126 17:47:39.735373 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:47:39 crc kubenswrapper[4787]: I0126 17:47:39.735458 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:47:39 crc kubenswrapper[4787]: I0126 17:47:39.800647 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:47:40 crc kubenswrapper[4787]: I0126 17:47:40.048970 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sw544" podUID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" containerName="registry-server" containerID="cri-o://32a24df459e872b156bdf66fc5bc12c9bce78b90e837cad61794fd107aa94a09" gracePeriod=2 Jan 26 17:47:40 crc kubenswrapper[4787]: I0126 17:47:40.094343 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:47:43 crc kubenswrapper[4787]: I0126 17:47:43.070848 4787 generic.go:334] "Generic (PLEG): container finished" podID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" containerID="fe647650de0df8335f66784bca743e55ac38a51b5bcf7bd0ddaf78cf1b7637df" exitCode=0 Jan 26 
17:47:43 crc kubenswrapper[4787]: I0126 17:47:43.071213 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxkns" event={"ID":"07c03fa2-278b-49da-b3c9-a5b2e78a06c1","Type":"ContainerDied","Data":"fe647650de0df8335f66784bca743e55ac38a51b5bcf7bd0ddaf78cf1b7637df"} Jan 26 17:47:43 crc kubenswrapper[4787]: I0126 17:47:43.073666 4787 generic.go:334] "Generic (PLEG): container finished" podID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" containerID="32a24df459e872b156bdf66fc5bc12c9bce78b90e837cad61794fd107aa94a09" exitCode=0 Jan 26 17:47:43 crc kubenswrapper[4787]: I0126 17:47:43.073699 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw544" event={"ID":"9e0a0430-30c3-40a8-aa3d-91c784c54e36","Type":"ContainerDied","Data":"32a24df459e872b156bdf66fc5bc12c9bce78b90e837cad61794fd107aa94a09"} Jan 26 17:47:44 crc kubenswrapper[4787]: I0126 17:47:44.652323 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:47:44 crc kubenswrapper[4787]: I0126 17:47:44.737106 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-catalog-content\") pod \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " Jan 26 17:47:44 crc kubenswrapper[4787]: I0126 17:47:44.737166 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msv4z\" (UniqueName: \"kubernetes.io/projected/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-kube-api-access-msv4z\") pod \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " Jan 26 17:47:44 crc kubenswrapper[4787]: I0126 17:47:44.737240 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-utilities\") pod \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " Jan 26 17:47:44 crc kubenswrapper[4787]: I0126 17:47:44.739428 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-utilities" (OuterVolumeSpecName: "utilities") pod "07c03fa2-278b-49da-b3c9-a5b2e78a06c1" (UID: "07c03fa2-278b-49da-b3c9-a5b2e78a06c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:47:44 crc kubenswrapper[4787]: I0126 17:47:44.743083 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-kube-api-access-msv4z" (OuterVolumeSpecName: "kube-api-access-msv4z") pod "07c03fa2-278b-49da-b3c9-a5b2e78a06c1" (UID: "07c03fa2-278b-49da-b3c9-a5b2e78a06c1"). InnerVolumeSpecName "kube-api-access-msv4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:47:44 crc kubenswrapper[4787]: I0126 17:47:44.839103 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:44 crc kubenswrapper[4787]: I0126 17:47:44.839472 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msv4z\" (UniqueName: \"kubernetes.io/projected/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-kube-api-access-msv4z\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:44 crc kubenswrapper[4787]: I0126 17:47:44.949343 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw544" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.043210 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-catalog-content\") pod \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") " Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.043307 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dxhs\" (UniqueName: \"kubernetes.io/projected/9e0a0430-30c3-40a8-aa3d-91c784c54e36-kube-api-access-8dxhs\") pod \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") " Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.043388 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-utilities\") pod \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\" (UID: \"9e0a0430-30c3-40a8-aa3d-91c784c54e36\") " Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.044927 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-utilities" (OuterVolumeSpecName: "utilities") pod "9e0a0430-30c3-40a8-aa3d-91c784c54e36" (UID: "9e0a0430-30c3-40a8-aa3d-91c784c54e36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.048384 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0a0430-30c3-40a8-aa3d-91c784c54e36-kube-api-access-8dxhs" (OuterVolumeSpecName: "kube-api-access-8dxhs") pod "9e0a0430-30c3-40a8-aa3d-91c784c54e36" (UID: "9e0a0430-30c3-40a8-aa3d-91c784c54e36"). InnerVolumeSpecName "kube-api-access-8dxhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.087274 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw544" event={"ID":"9e0a0430-30c3-40a8-aa3d-91c784c54e36","Type":"ContainerDied","Data":"50fc53f6874f956ce83117c7cdc6df20ba30085a9820e2cfc49b00ca10404d17"} Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.087301 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw544" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.087340 4787 scope.go:117] "RemoveContainer" containerID="32a24df459e872b156bdf66fc5bc12c9bce78b90e837cad61794fd107aa94a09" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.090545 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxkns" event={"ID":"07c03fa2-278b-49da-b3c9-a5b2e78a06c1","Type":"ContainerDied","Data":"f9617ed6ad601af7f82294520c6d71ecd6ae2756e9974d722ea984dde50dfe33"} Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.090692 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bxkns" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.108127 4787 scope.go:117] "RemoveContainer" containerID="1de47583fe44de423711e58b9facaff547427296a330be71029f3e6e0bfc5cca" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.126374 4787 scope.go:117] "RemoveContainer" containerID="475db7c66e4df82d4e3126e9a75c28cd6b5a438831083850068413a9a37db93e" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.140229 4787 scope.go:117] "RemoveContainer" containerID="fe647650de0df8335f66784bca743e55ac38a51b5bcf7bd0ddaf78cf1b7637df" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.146679 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dxhs\" (UniqueName: \"kubernetes.io/projected/9e0a0430-30c3-40a8-aa3d-91c784c54e36-kube-api-access-8dxhs\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.146712 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.161216 4787 scope.go:117] "RemoveContainer" containerID="51ea0901e4a8b36612d0c769efbe37c8ac94457760d22bd04e7812a63a36530a" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.174758 4787 scope.go:117] "RemoveContainer" containerID="ca91443675a1c5c6c273d3e90f4da6f39411d1c0fc4ed59f9039ae8b42fb1b4a" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.224742 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e0a0430-30c3-40a8-aa3d-91c784c54e36" (UID: "9e0a0430-30c3-40a8-aa3d-91c784c54e36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.248604 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0a0430-30c3-40a8-aa3d-91c784c54e36-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.421329 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sw544"] Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.425249 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sw544"] Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.551486 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07c03fa2-278b-49da-b3c9-a5b2e78a06c1" (UID: "07c03fa2-278b-49da-b3c9-a5b2e78a06c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.552013 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-catalog-content\") pod \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\" (UID: \"07c03fa2-278b-49da-b3c9-a5b2e78a06c1\") " Jan 26 17:47:45 crc kubenswrapper[4787]: W0126 17:47:45.552046 4787 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/07c03fa2-278b-49da-b3c9-a5b2e78a06c1/volumes/kubernetes.io~empty-dir/catalog-content Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.552069 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07c03fa2-278b-49da-b3c9-a5b2e78a06c1" (UID: "07c03fa2-278b-49da-b3c9-a5b2e78a06c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.552488 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c03fa2-278b-49da-b3c9-a5b2e78a06c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.595591 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" path="/var/lib/kubelet/pods/9e0a0430-30c3-40a8-aa3d-91c784c54e36/volumes" Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.711732 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bxkns"] Jan 26 17:47:45 crc kubenswrapper[4787]: I0126 17:47:45.715341 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bxkns"] Jan 26 17:47:47 crc kubenswrapper[4787]: I0126 17:47:47.596365 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" path="/var/lib/kubelet/pods/07c03fa2-278b-49da-b3c9-a5b2e78a06c1/volumes" Jan 26 17:47:48 crc kubenswrapper[4787]: I0126 17:47:48.609816 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-48jfn"] Jan 26 17:47:49 crc kubenswrapper[4787]: I0126 17:47:49.127690 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7jlq" event={"ID":"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b","Type":"ContainerStarted","Data":"265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54"} Jan 26 17:47:49 crc kubenswrapper[4787]: I0126 17:47:49.129534 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcwm" event={"ID":"aa0ed742-766e-4e25-887e-555a5420fe8b","Type":"ContainerStarted","Data":"1c6f765e41e377135181b0fcef7374b47ad7d484455c81ae124a3644aee59a36"} 
Jan 26 17:47:49 crc kubenswrapper[4787]: I0126 17:47:49.154977 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b7jlq" podStartSLOduration=2.8375363350000002 podStartE2EDuration="1m21.1549611s" podCreationTimestamp="2026-01-26 17:46:28 +0000 UTC" firstStartedPulling="2026-01-26 17:46:29.509024364 +0000 UTC m=+158.216160497" lastFinishedPulling="2026-01-26 17:47:47.826449129 +0000 UTC m=+236.533585262" observedRunningTime="2026-01-26 17:47:49.151617978 +0000 UTC m=+237.858754111" watchObservedRunningTime="2026-01-26 17:47:49.1549611 +0000 UTC m=+237.862097233" Jan 26 17:47:49 crc kubenswrapper[4787]: I0126 17:47:49.174219 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vdcwm" podStartSLOduration=4.1538014 podStartE2EDuration="1m20.174201784s" podCreationTimestamp="2026-01-26 17:46:29 +0000 UTC" firstStartedPulling="2026-01-26 17:46:31.56573274 +0000 UTC m=+160.272868873" lastFinishedPulling="2026-01-26 17:47:47.586133114 +0000 UTC m=+236.293269257" observedRunningTime="2026-01-26 17:47:49.173575504 +0000 UTC m=+237.880711627" watchObservedRunningTime="2026-01-26 17:47:49.174201784 +0000 UTC m=+237.881337907" Jan 26 17:47:50 crc kubenswrapper[4787]: I0126 17:47:50.125635 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:47:50 crc kubenswrapper[4787]: I0126 17:47:50.126224 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:47:51 crc kubenswrapper[4787]: I0126 17:47:51.163715 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vdcwm" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerName="registry-server" probeResult="failure" output=< Jan 26 17:47:51 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" 
within 1s Jan 26 17:47:51 crc kubenswrapper[4787]: > Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.798039 4787 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.799016 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" containerName="extract-utilities" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799033 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" containerName="extract-utilities" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.799045 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" containerName="extract-utilities" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799054 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" containerName="extract-utilities" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.799074 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e37ac9-ea49-4b4f-a81b-c864ebe70c18" containerName="pruner" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799081 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e37ac9-ea49-4b4f-a81b-c864ebe70c18" containerName="pruner" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.799091 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" containerName="registry-server" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799099 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" containerName="registry-server" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.799111 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" 
containerName="extract-content" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799118 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" containerName="extract-content" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.799132 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" containerName="extract-content" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799139 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" containerName="extract-content" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.799149 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" containerName="registry-server" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799157 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" containerName="registry-server" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799291 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0a0430-30c3-40a8-aa3d-91c784c54e36" containerName="registry-server" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799305 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e37ac9-ea49-4b4f-a81b-c864ebe70c18" containerName="pruner" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799312 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c03fa2-278b-49da-b3c9-a5b2e78a06c1" containerName="registry-server" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799625 4787 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799845 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784" gracePeriod=15 Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.799972 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53" gracePeriod=15 Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800009 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610" gracePeriod=15 Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800024 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800054 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f" gracePeriod=15 Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800187 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05" gracePeriod=15 Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800466 4787 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.800588 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800599 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.800610 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800618 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.800629 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800636 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.800644 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800649 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.800686 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800692 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.800701 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800706 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800793 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800805 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800815 4787 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800823 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.800832 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 17:47:55 crc kubenswrapper[4787]: E0126 17:47:55.833438 4787 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.922645 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.922687 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.922937 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.923020 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.923110 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.923163 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.923243 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:47:55 crc kubenswrapper[4787]: I0126 17:47:55.923284 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.024602 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.024650 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.024695 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.024716 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.024807 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.024842 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.024860 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.024912 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.024939 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.024980 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.024978 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.025016 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.025034 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.025005 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.025065 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.025049 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.134830 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:56 crc kubenswrapper[4787]: E0126 17:47:56.153800 4787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e591b03097b53 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 17:47:56.153125715 +0000 UTC m=+244.860261848,LastTimestamp:2026-01-26 17:47:56.153125715 +0000 UTC m=+244.860261848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 17:47:56 crc kubenswrapper[4787]: 
I0126 17:47:56.168712 4787 generic.go:334] "Generic (PLEG): container finished" podID="3407c9bf-1f3e-457a-9bda-900758dc326f" containerID="0531e1595057e2c52ef4f2b8c4ff58d89d7b71acdbb9c8002fbbf1a01849d20c" exitCode=0 Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.168769 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3407c9bf-1f3e-457a-9bda-900758dc326f","Type":"ContainerDied","Data":"0531e1595057e2c52ef4f2b8c4ff58d89d7b71acdbb9c8002fbbf1a01849d20c"} Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.169504 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.170032 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.170404 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6f35f3c96d48d099d8abf2160980ee473ec68fffc6a4618e24fffa1b7525f0f9"} Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.174026 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.174739 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53" exitCode=0 Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.174759 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05" exitCode=0 Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.174767 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f" exitCode=0 Jan 26 17:47:56 crc kubenswrapper[4787]: I0126 17:47:56.174774 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610" exitCode=2 Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.184632 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b"} Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.185454 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:57 crc kubenswrapper[4787]: E0126 17:47:57.185601 4787 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 
17:47:57.185858 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.436864 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.437478 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.437990 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.542719 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-var-lock\") pod \"3407c9bf-1f3e-457a-9bda-900758dc326f\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.542819 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3407c9bf-1f3e-457a-9bda-900758dc326f-kube-api-access\") pod \"3407c9bf-1f3e-457a-9bda-900758dc326f\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " Jan 
26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.542854 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-kubelet-dir\") pod \"3407c9bf-1f3e-457a-9bda-900758dc326f\" (UID: \"3407c9bf-1f3e-457a-9bda-900758dc326f\") " Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.542900 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-var-lock" (OuterVolumeSpecName: "var-lock") pod "3407c9bf-1f3e-457a-9bda-900758dc326f" (UID: "3407c9bf-1f3e-457a-9bda-900758dc326f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.543030 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3407c9bf-1f3e-457a-9bda-900758dc326f" (UID: "3407c9bf-1f3e-457a-9bda-900758dc326f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.543256 4787 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.543273 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3407c9bf-1f3e-457a-9bda-900758dc326f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.549709 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3407c9bf-1f3e-457a-9bda-900758dc326f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3407c9bf-1f3e-457a-9bda-900758dc326f" (UID: "3407c9bf-1f3e-457a-9bda-900758dc326f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:47:57 crc kubenswrapper[4787]: I0126 17:47:57.644985 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3407c9bf-1f3e-457a-9bda-900758dc326f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.173536 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.174773 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.175545 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.176023 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.203495 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.204051 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784" exitCode=0 Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.204134 4787 scope.go:117] "RemoveContainer" containerID="a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.204192 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.206348 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.206341 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3407c9bf-1f3e-457a-9bda-900758dc326f","Type":"ContainerDied","Data":"b905446547f7027107b87795d0058cb9cc1c5ade51644fd050b969b23f4fa1cb"} Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.206504 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b905446547f7027107b87795d0058cb9cc1c5ade51644fd050b969b23f4fa1cb" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.207109 4787 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.209769 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.210071 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.216816 4787 scope.go:117] "RemoveContainer" containerID="d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.229012 4787 scope.go:117] "RemoveContainer" 
containerID="e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.241230 4787 scope.go:117] "RemoveContainer" containerID="30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.253279 4787 scope.go:117] "RemoveContainer" containerID="aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.267570 4787 scope.go:117] "RemoveContainer" containerID="5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.285129 4787 scope.go:117] "RemoveContainer" containerID="a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.289270 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\": container with ID starting with a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53 not found: ID does not exist" containerID="a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.289387 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53"} err="failed to get container status \"a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\": rpc error: code = NotFound desc = could not find container \"a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53\": container with ID starting with a0be4bf85de8ad4488ce37fc51cc813eea0f83f2e0c7eae26827266294b93b53 not found: ID does not exist" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.289488 4787 scope.go:117] "RemoveContainer" 
containerID="d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.290204 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\": container with ID starting with d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05 not found: ID does not exist" containerID="d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.290316 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05"} err="failed to get container status \"d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\": rpc error: code = NotFound desc = could not find container \"d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05\": container with ID starting with d6c17f06659b5e89997da9de5eefcf8cc5368f4d6dd020fbd91daeff20972f05 not found: ID does not exist" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.290424 4787 scope.go:117] "RemoveContainer" containerID="e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.290761 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\": container with ID starting with e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f not found: ID does not exist" containerID="e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.290889 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f"} err="failed to get container status \"e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\": rpc error: code = NotFound desc = could not find container \"e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f\": container with ID starting with e125f792cb178c8edf4740e348b1ae58f1a2f71ad359dc5d928c03efc2a1ee0f not found: ID does not exist" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.291020 4787 scope.go:117] "RemoveContainer" containerID="30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.291409 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\": container with ID starting with 30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610 not found: ID does not exist" containerID="30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.291435 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610"} err="failed to get container status \"30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\": rpc error: code = NotFound desc = could not find container \"30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610\": container with ID starting with 30c574685dbdfd6281a6a4662056f4908a141e9357d7061af49d731ae3910610 not found: ID does not exist" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.291451 4787 scope.go:117] "RemoveContainer" containerID="aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.291655 4787 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\": container with ID starting with aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784 not found: ID does not exist" containerID="aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.291672 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784"} err="failed to get container status \"aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\": rpc error: code = NotFound desc = could not find container \"aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784\": container with ID starting with aa5cb7d812b809470b98f7f53994af6397551725d2dc8888cee20ec1bd1f5784 not found: ID does not exist" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.291685 4787 scope.go:117] "RemoveContainer" containerID="5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.291856 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\": container with ID starting with 5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40 not found: ID does not exist" containerID="5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.291870 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40"} err="failed to get container status \"5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\": rpc error: code = NotFound desc = could not find container 
\"5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40\": container with ID starting with 5ab8b26dd7cf753bb8428b6d3e0159f8695e4601c7716906e1c6bd647ee67e40 not found: ID does not exist" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.353083 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.353128 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.353188 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.353231 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.353309 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.353386 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.353467 4787 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.353484 4787 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.353496 4787 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.531003 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.531505 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection 
refused" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.727874 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.727929 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.759018 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:47:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:47:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:47:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:47:58Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.759338 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 
17:47:58.759536 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.759709 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.759881 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:58 crc kubenswrapper[4787]: E0126 17:47:58.759897 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.768061 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.768338 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.768511 4787 status_manager.go:851] "Failed to get status for pod" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" pod="openshift-marketplace/redhat-marketplace-b7jlq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b7jlq\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 
17:47:58 crc kubenswrapper[4787]: I0126 17:47:58.768791 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:59 crc kubenswrapper[4787]: I0126 17:47:59.250019 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:47:59 crc kubenswrapper[4787]: I0126 17:47:59.251186 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:59 crc kubenswrapper[4787]: I0126 17:47:59.251475 4787 status_manager.go:851] "Failed to get status for pod" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" pod="openshift-marketplace/redhat-marketplace-b7jlq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b7jlq\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:59 crc kubenswrapper[4787]: I0126 17:47:59.251747 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:47:59 crc kubenswrapper[4787]: I0126 17:47:59.595401 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 17:48:00 crc 
kubenswrapper[4787]: I0126 17:48:00.166041 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:48:00 crc kubenswrapper[4787]: I0126 17:48:00.167108 4787 status_manager.go:851] "Failed to get status for pod" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" pod="openshift-marketplace/redhat-operators-vdcwm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vdcwm\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:00 crc kubenswrapper[4787]: I0126 17:48:00.167625 4787 status_manager.go:851] "Failed to get status for pod" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" pod="openshift-marketplace/redhat-marketplace-b7jlq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b7jlq\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:00 crc kubenswrapper[4787]: I0126 17:48:00.168176 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:00 crc kubenswrapper[4787]: I0126 17:48:00.205198 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:48:00 crc kubenswrapper[4787]: I0126 17:48:00.205709 4787 status_manager.go:851] "Failed to get status for pod" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" pod="openshift-marketplace/redhat-marketplace-b7jlq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b7jlq\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:00 crc kubenswrapper[4787]: I0126 17:48:00.206108 4787 
status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:00 crc kubenswrapper[4787]: I0126 17:48:00.206372 4787 status_manager.go:851] "Failed to get status for pod" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" pod="openshift-marketplace/redhat-operators-vdcwm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vdcwm\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:00 crc kubenswrapper[4787]: E0126 17:48:00.264237 4787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e591b03097b53 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 17:47:56.153125715 +0000 UTC m=+244.860261848,LastTimestamp:2026-01-26 17:47:56.153125715 +0000 UTC m=+244.860261848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 17:48:01 crc kubenswrapper[4787]: I0126 17:48:01.594387 4787 
status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:01 crc kubenswrapper[4787]: I0126 17:48:01.594918 4787 status_manager.go:851] "Failed to get status for pod" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" pod="openshift-marketplace/redhat-operators-vdcwm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vdcwm\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:01 crc kubenswrapper[4787]: I0126 17:48:01.595337 4787 status_manager.go:851] "Failed to get status for pod" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" pod="openshift-marketplace/redhat-marketplace-b7jlq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b7jlq\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:04 crc kubenswrapper[4787]: E0126 17:48:04.712834 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:04 crc kubenswrapper[4787]: E0126 17:48:04.713888 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:04 crc kubenswrapper[4787]: E0126 17:48:04.714462 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:04 
crc kubenswrapper[4787]: E0126 17:48:04.714988 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:04 crc kubenswrapper[4787]: E0126 17:48:04.715436 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:04 crc kubenswrapper[4787]: I0126 17:48:04.715477 4787 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 17:48:04 crc kubenswrapper[4787]: E0126 17:48:04.715900 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms" Jan 26 17:48:04 crc kubenswrapper[4787]: E0126 17:48:04.917211 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Jan 26 17:48:05 crc kubenswrapper[4787]: E0126 17:48:05.318273 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Jan 26 17:48:06 crc kubenswrapper[4787]: E0126 17:48:06.120022 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Jan 26 17:48:07 crc kubenswrapper[4787]: E0126 17:48:07.721336 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="3.2s" Jan 26 17:48:08 crc kubenswrapper[4787]: I0126 17:48:08.267151 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 17:48:08 crc kubenswrapper[4787]: I0126 17:48:08.267441 4787 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed" exitCode=1 Jan 26 17:48:08 crc kubenswrapper[4787]: I0126 17:48:08.267473 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed"} Jan 26 17:48:08 crc kubenswrapper[4787]: I0126 17:48:08.267911 4787 scope.go:117] "RemoveContainer" containerID="e884129b4d227ea74ed705e1bcb6458143e6ded39a9e1cb3607a7c458b0cb0ed" Jan 26 17:48:08 crc kubenswrapper[4787]: I0126 17:48:08.268427 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:08 crc kubenswrapper[4787]: I0126 17:48:08.268865 4787 
status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:08 crc kubenswrapper[4787]: I0126 17:48:08.269223 4787 status_manager.go:851] "Failed to get status for pod" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" pod="openshift-marketplace/redhat-operators-vdcwm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vdcwm\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:08 crc kubenswrapper[4787]: I0126 17:48:08.269473 4787 status_manager.go:851] "Failed to get status for pod" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" pod="openshift-marketplace/redhat-marketplace-b7jlq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b7jlq\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: E0126 17:48:09.077341 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:48:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:48:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:48:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:48:09Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: E0126 17:48:09.077828 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: E0126 17:48:09.078358 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: E0126 17:48:09.078804 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 
17:48:09 crc kubenswrapper[4787]: E0126 17:48:09.079462 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: E0126 17:48:09.079500 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.280276 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.280430 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"344c17ed7e754dfb2d8e603f29572152be1b411689cdd369e1ba49c2e60c61b0"} Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.281624 4787 status_manager.go:851] "Failed to get status for pod" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" pod="openshift-marketplace/redhat-marketplace-b7jlq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b7jlq\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.282170 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.282730 4787 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.283286 4787 status_manager.go:851] "Failed to get status for pod" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" pod="openshift-marketplace/redhat-operators-vdcwm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vdcwm\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.589298 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.590885 4787 status_manager.go:851] "Failed to get status for pod" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" pod="openshift-marketplace/redhat-marketplace-b7jlq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b7jlq\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.591458 4787 status_manager.go:851] "Failed to get status for pod" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.592026 4787 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 
38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.592532 4787 status_manager.go:851] "Failed to get status for pod" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" pod="openshift-marketplace/redhat-operators-vdcwm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vdcwm\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.607319 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d334843c-0fc8-4f2b-be0d-04f020ec3259" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.607377 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d334843c-0fc8-4f2b-be0d-04f020ec3259" Jan 26 17:48:09 crc kubenswrapper[4787]: E0126 17:48:09.608015 4787 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:48:09 crc kubenswrapper[4787]: I0126 17:48:09.608492 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:48:09 crc kubenswrapper[4787]: W0126 17:48:09.636139 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-026ab4bbdca47b9cf4a9ee982a72bcb0d7777e5474e7874270f9467995f62ffe WatchSource:0}: Error finding container 026ab4bbdca47b9cf4a9ee982a72bcb0d7777e5474e7874270f9467995f62ffe: Status 404 returned error can't find the container with id 026ab4bbdca47b9cf4a9ee982a72bcb0d7777e5474e7874270f9467995f62ffe Jan 26 17:48:10 crc kubenswrapper[4787]: E0126 17:48:10.265196 4787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e591b03097b53 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 17:47:56.153125715 +0000 UTC m=+244.860261848,LastTimestamp:2026-01-26 17:47:56.153125715 +0000 UTC m=+244.860261848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 17:48:10 crc kubenswrapper[4787]: I0126 17:48:10.291091 4787 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerID="85cae52990fad0a5cf8b4d08b153e79680f78129bd6d93b4bfefc3bfda1b7da4" exitCode=0 Jan 26 17:48:10 crc kubenswrapper[4787]: I0126 17:48:10.291167 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"85cae52990fad0a5cf8b4d08b153e79680f78129bd6d93b4bfefc3bfda1b7da4"} Jan 26 17:48:10 crc kubenswrapper[4787]: I0126 17:48:10.291211 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"026ab4bbdca47b9cf4a9ee982a72bcb0d7777e5474e7874270f9467995f62ffe"} Jan 26 17:48:10 crc kubenswrapper[4787]: I0126 17:48:10.291625 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d334843c-0fc8-4f2b-be0d-04f020ec3259" Jan 26 17:48:10 crc kubenswrapper[4787]: I0126 17:48:10.291655 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d334843c-0fc8-4f2b-be0d-04f020ec3259" Jan 26 17:48:10 crc kubenswrapper[4787]: E0126 17:48:10.292242 4787 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:48:10 crc kubenswrapper[4787]: I0126 17:48:10.292299 4787 status_manager.go:851] "Failed to get status for pod" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" pod="openshift-marketplace/redhat-marketplace-b7jlq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b7jlq\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:10 crc kubenswrapper[4787]: I0126 17:48:10.292888 4787 status_manager.go:851] "Failed to get status for pod" 
podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:10 crc kubenswrapper[4787]: I0126 17:48:10.293383 4787 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:10 crc kubenswrapper[4787]: I0126 17:48:10.293829 4787 status_manager.go:851] "Failed to get status for pod" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" pod="openshift-marketplace/redhat-operators-vdcwm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vdcwm\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 17:48:11 crc kubenswrapper[4787]: I0126 17:48:11.304303 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1cffeefb6f0c6c9e382fd3bde5028b5f4948825bebced27a651c0d062de32300"} Jan 26 17:48:11 crc kubenswrapper[4787]: I0126 17:48:11.304640 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c710ed53b46e3647560dd2b4afa2674fe70977e59b781fcd0d6a1876dd4dd8f5"} Jan 26 17:48:11 crc kubenswrapper[4787]: I0126 17:48:11.304658 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"08c624ec6ae2e570f6f2913f51fdbc06bcdc9ec410cc4ed1b2d12602aea93c1d"} Jan 26 17:48:12 crc kubenswrapper[4787]: I0126 17:48:12.315653 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ae5a2aaaec0a42f7222d13d2344f52e3469fae05e385b3493a38b2f60789a737"} Jan 26 17:48:12 crc kubenswrapper[4787]: I0126 17:48:12.315976 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:48:12 crc kubenswrapper[4787]: I0126 17:48:12.315992 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"703cce7995960b0a0d1e4c873b3ba4539a21a5d89fcd244337416d5087a6d37b"} Jan 26 17:48:12 crc kubenswrapper[4787]: I0126 17:48:12.316121 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d334843c-0fc8-4f2b-be0d-04f020ec3259" Jan 26 17:48:12 crc kubenswrapper[4787]: I0126 17:48:12.316154 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d334843c-0fc8-4f2b-be0d-04f020ec3259" Jan 26 17:48:13 crc kubenswrapper[4787]: I0126 17:48:13.033260 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:48:13 crc kubenswrapper[4787]: I0126 17:48:13.033506 4787 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 17:48:13 crc kubenswrapper[4787]: I0126 
17:48:13.033568 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 17:48:13 crc kubenswrapper[4787]: I0126 17:48:13.638872 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" podUID="fc5e8dd5-46e7-4849-b278-d1397195e659" containerName="oauth-openshift" containerID="cri-o://478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3" gracePeriod=15 Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.053049 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.170769 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-session\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.170827 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gtsm\" (UniqueName: \"kubernetes.io/projected/fc5e8dd5-46e7-4849-b278-d1397195e659-kube-api-access-8gtsm\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.170869 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-serving-cert\") pod 
\"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.170979 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-cliconfig\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.171036 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-login\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.171066 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-router-certs\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.171092 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-trusted-ca-bundle\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.171117 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-provider-selection\") pod 
\"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.171147 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-ocp-branding-template\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.171182 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-idp-0-file-data\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.171251 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-dir\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.171283 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-error\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.171346 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-service-ca\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 
26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.171370 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-policies\") pod \"fc5e8dd5-46e7-4849-b278-d1397195e659\" (UID: \"fc5e8dd5-46e7-4849-b278-d1397195e659\") " Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.172799 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.172860 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.173095 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.173183 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.173863 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.179119 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.179718 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.180339 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.180537 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5e8dd5-46e7-4849-b278-d1397195e659-kube-api-access-8gtsm" (OuterVolumeSpecName: "kube-api-access-8gtsm") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "kube-api-access-8gtsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.180686 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.181186 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.182545 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.183301 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.186277 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "fc5e8dd5-46e7-4849-b278-d1397195e659" (UID: "fc5e8dd5-46e7-4849-b278-d1397195e659"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.272644 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.272721 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.272754 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.272778 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.272797 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.272819 4787 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.272838 4787 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.272857 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.272876 4787 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.273784 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.273824 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gtsm\" (UniqueName: \"kubernetes.io/projected/fc5e8dd5-46e7-4849-b278-d1397195e659-kube-api-access-8gtsm\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.273992 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.274045 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.274068 4787 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fc5e8dd5-46e7-4849-b278-d1397195e659-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.328184 4787 generic.go:334] "Generic (PLEG): container finished" podID="fc5e8dd5-46e7-4849-b278-d1397195e659" containerID="478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3" exitCode=0 Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.328221 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" event={"ID":"fc5e8dd5-46e7-4849-b278-d1397195e659","Type":"ContainerDied","Data":"478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3"} Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.328260 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" event={"ID":"fc5e8dd5-46e7-4849-b278-d1397195e659","Type":"ContainerDied","Data":"038c9aefcda7e9363b3e98b55532f0b55416591500402b8bf0a2eb76db8bb055"} Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.328255 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-48jfn" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.328358 4787 scope.go:117] "RemoveContainer" containerID="478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.347560 4787 scope.go:117] "RemoveContainer" containerID="478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3" Jan 26 17:48:14 crc kubenswrapper[4787]: E0126 17:48:14.348099 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3\": container with ID starting with 478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3 not found: ID does not exist" containerID="478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.348138 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3"} err="failed to get container status \"478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3\": rpc error: code = NotFound desc = could not find container \"478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3\": container with ID starting with 478de19405f847f1e2bb7a6ea4775534a6deb97a3ba73b9a98e2c092338b23d3 not found: ID does not exist" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.609531 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.609845 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.617678 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:48:14 crc kubenswrapper[4787]: I0126 17:48:14.682482 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:48:17 crc kubenswrapper[4787]: I0126 17:48:17.325261 4787 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:48:17 crc kubenswrapper[4787]: I0126 17:48:17.347848 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d334843c-0fc8-4f2b-be0d-04f020ec3259" Jan 26 17:48:17 crc kubenswrapper[4787]: I0126 17:48:17.347899 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d334843c-0fc8-4f2b-be0d-04f020ec3259" Jan 26 17:48:17 crc kubenswrapper[4787]: I0126 17:48:17.353015 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:48:17 crc kubenswrapper[4787]: I0126 17:48:17.356454 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="dff21c06-6cda-4504-aae6-b8f71350086f" Jan 26 17:48:17 crc kubenswrapper[4787]: E0126 17:48:17.387185 4787 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 26 17:48:17 crc kubenswrapper[4787]: E0126 17:48:17.930923 4787 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 26 17:48:18 crc kubenswrapper[4787]: I0126 17:48:18.353700 4787 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d334843c-0fc8-4f2b-be0d-04f020ec3259" Jan 26 17:48:18 crc kubenswrapper[4787]: I0126 17:48:18.354543 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d334843c-0fc8-4f2b-be0d-04f020ec3259" Jan 26 17:48:21 crc kubenswrapper[4787]: I0126 17:48:21.608975 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="dff21c06-6cda-4504-aae6-b8f71350086f" Jan 26 17:48:23 crc kubenswrapper[4787]: I0126 17:48:23.040118 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:48:23 crc kubenswrapper[4787]: I0126 17:48:23.050142 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 17:48:26 crc kubenswrapper[4787]: I0126 17:48:26.377816 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 17:48:27 crc kubenswrapper[4787]: I0126 17:48:27.279992 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 17:48:27 crc kubenswrapper[4787]: I0126 17:48:27.463224 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 17:48:28 crc kubenswrapper[4787]: I0126 17:48:28.474388 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 17:48:28 crc kubenswrapper[4787]: I0126 17:48:28.649037 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 17:48:28 crc kubenswrapper[4787]: I0126 
17:48:28.810561 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 17:48:28 crc kubenswrapper[4787]: I0126 17:48:28.884525 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 17:48:28 crc kubenswrapper[4787]: I0126 17:48:28.988021 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 17:48:29 crc kubenswrapper[4787]: I0126 17:48:29.159096 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 17:48:29 crc kubenswrapper[4787]: I0126 17:48:29.166355 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 17:48:29 crc kubenswrapper[4787]: I0126 17:48:29.257748 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 17:48:29 crc kubenswrapper[4787]: I0126 17:48:29.698125 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 17:48:29 crc kubenswrapper[4787]: I0126 17:48:29.892745 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 17:48:29 crc kubenswrapper[4787]: I0126 17:48:29.905742 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.005527 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.024554 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.091821 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.093035 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.299194 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.302077 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.572407 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.622986 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.663584 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.762669 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.775827 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.907654 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 
17:48:30.928994 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.945323 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 17:48:30 crc kubenswrapper[4787]: I0126 17:48:30.945829 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.077652 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.188344 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.268491 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.331718 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.380621 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.480260 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.556798 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.612302 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-client" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.612919 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.797067 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.803470 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.861050 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.875790 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.934940 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.943195 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 17:48:31 crc kubenswrapper[4787]: I0126 17:48:31.960302 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.032223 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.140101 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 
17:48:32.173585 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.189387 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.248586 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.370520 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.448811 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.464109 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.542143 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.606992 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.646280 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.803095 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 17:48:32 crc kubenswrapper[4787]: I0126 17:48:32.834937 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.000451 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.001347 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.027830 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.062537 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.164536 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.357880 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.366685 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.618520 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.683103 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.698865 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.710386 4787 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.778514 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.783519 4787 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.797501 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.933629 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 17:48:33 crc kubenswrapper[4787]: I0126 17:48:33.978058 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.013990 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.028466 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.029607 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.124799 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.200348 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 
17:48:34.216427 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.275342 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.378244 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.382301 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.406339 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.504065 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.552272 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.567672 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.570992 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.614452 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.656848 4787 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.822603 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.916208 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 17:48:34 crc kubenswrapper[4787]: I0126 17:48:34.972561 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.089514 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.110139 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.164084 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.189007 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.243113 4787 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.316552 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.395215 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.413780 4787 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.430223 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.450472 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.502794 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.503877 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.547502 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.607334 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.616935 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.649531 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.782186 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.820111 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 17:48:35 crc 
kubenswrapper[4787]: I0126 17:48:35.867871 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.889889 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.915850 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.967392 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 17:48:35 crc kubenswrapper[4787]: I0126 17:48:35.984414 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.031409 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.175714 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.199829 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.224746 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.233470 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.320743 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 
17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.322738 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.329621 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.347444 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.370581 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.402476 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.467840 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.502274 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.568318 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.574790 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.584064 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.697125 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.702709 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.705110 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.744148 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.755659 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.762789 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.799843 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.800922 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.815602 4787 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.820499 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-48jfn"] Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.820588 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.831253 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.836278 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.849302 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.849278473 podStartE2EDuration="19.849278473s" podCreationTimestamp="2026-01-26 17:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:48:36.843339148 +0000 UTC m=+285.550475281" watchObservedRunningTime="2026-01-26 17:48:36.849278473 +0000 UTC m=+285.556414606" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.874512 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 17:48:36 crc kubenswrapper[4787]: I0126 17:48:36.981118 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.084712 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.158092 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.183359 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.306326 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 17:48:37 crc 
kubenswrapper[4787]: I0126 17:48:37.372458 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.485823 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.510046 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.583975 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.596344 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5e8dd5-46e7-4849-b278-d1397195e659" path="/var/lib/kubelet/pods/fc5e8dd5-46e7-4849-b278-d1397195e659/volumes" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.770925 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.826921 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.852550 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.853242 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.865820 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.919406 4787 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.920737 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.975401 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 17:48:37 crc kubenswrapper[4787]: I0126 17:48:37.983870 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.049503 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.095617 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.151600 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.258183 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.304384 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.394115 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.395261 4787 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.503196 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.520478 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.678075 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.812385 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.824512 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.877065 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.888219 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 17:48:38 crc kubenswrapper[4787]: I0126 17:48:38.956173 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.015228 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.036222 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.243484 4787 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.377801 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.398563 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.548054 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.571712 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.599931 4787 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.600185 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b" gracePeriod=5 Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.625168 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.664771 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.682969 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 17:48:39 crc 
kubenswrapper[4787]: I0126 17:48:39.719368 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.859679 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.888468 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.942264 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.965587 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 17:48:39 crc kubenswrapper[4787]: I0126 17:48:39.970554 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.197996 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.332440 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.348369 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.407849 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.526335 4787 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.567973 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.787881 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.809596 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.912910 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.964528 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.974553 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 17:48:40 crc kubenswrapper[4787]: I0126 17:48:40.989471 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.029792 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.070792 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.130500 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 17:48:41 crc 
kubenswrapper[4787]: I0126 17:48:41.163071 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.220127 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.334907 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.356731 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.395137 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.402338 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.451464 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.459718 4787 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.474098 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.487110 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.688430 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 
17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.801968 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 17:48:41 crc kubenswrapper[4787]: I0126 17:48:41.970275 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 17:48:42 crc kubenswrapper[4787]: I0126 17:48:42.058611 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 17:48:42 crc kubenswrapper[4787]: I0126 17:48:42.101515 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 17:48:42 crc kubenswrapper[4787]: I0126 17:48:42.243417 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 17:48:42 crc kubenswrapper[4787]: I0126 17:48:42.300641 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 17:48:42 crc kubenswrapper[4787]: I0126 17:48:42.383787 4787 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 17:48:42 crc kubenswrapper[4787]: I0126 17:48:42.391505 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 17:48:42 crc kubenswrapper[4787]: I0126 17:48:42.611913 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 17:48:42 crc kubenswrapper[4787]: I0126 17:48:42.612359 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 17:48:42 crc kubenswrapper[4787]: I0126 17:48:42.789733 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 17:48:43 crc kubenswrapper[4787]: I0126 17:48:43.250514 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 17:48:43 crc kubenswrapper[4787]: I0126 17:48:43.297306 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 17:48:43 crc kubenswrapper[4787]: I0126 17:48:43.429493 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 17:48:43 crc kubenswrapper[4787]: I0126 17:48:43.702976 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.189466 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx"] Jan 26 17:48:44 crc kubenswrapper[4787]: E0126 17:48:44.189655 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.189666 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 17:48:44 crc kubenswrapper[4787]: E0126 17:48:44.189687 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5e8dd5-46e7-4849-b278-d1397195e659" containerName="oauth-openshift" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.189693 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5e8dd5-46e7-4849-b278-d1397195e659" containerName="oauth-openshift" Jan 26 17:48:44 crc kubenswrapper[4787]: E0126 17:48:44.189703 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" containerName="installer" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.189711 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" containerName="installer" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.189821 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.189842 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3407c9bf-1f3e-457a-9bda-900758dc326f" containerName="installer" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.189856 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5e8dd5-46e7-4849-b278-d1397195e659" containerName="oauth-openshift" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.190322 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.193388 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.193467 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.193650 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.194075 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.194285 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 17:48:44 
crc kubenswrapper[4787]: I0126 17:48:44.194514 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.195280 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.196175 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.196561 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.197178 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.197423 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.197482 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.203859 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.212651 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.213512 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.218795 4787 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx"] Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.326458 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351588 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-template-login\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351631 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351658 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351692 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-session\") 
pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351720 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-audit-policies\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351741 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351776 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351803 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9845z\" (UniqueName: \"kubernetes.io/projected/4169f86a-b21a-4302-a6c9-76c340d8e2bb-kube-api-access-9845z\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: 
I0126 17:48:44.351823 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4169f86a-b21a-4302-a6c9-76c340d8e2bb-audit-dir\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351847 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351879 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351901 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351923 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.351971 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-template-error\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.453459 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.453576 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9845z\" (UniqueName: \"kubernetes.io/projected/4169f86a-b21a-4302-a6c9-76c340d8e2bb-kube-api-access-9845z\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.453642 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4169f86a-b21a-4302-a6c9-76c340d8e2bb-audit-dir\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc 
kubenswrapper[4787]: I0126 17:48:44.453701 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.453791 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.453825 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4169f86a-b21a-4302-a6c9-76c340d8e2bb-audit-dir\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.453843 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.453923 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.453982 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-template-error\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.454065 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-template-login\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.454098 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.454123 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") 
" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.454137 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.454197 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-session\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.454221 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-audit-policies\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.454249 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.455437 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-audit-policies\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.456429 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.456821 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.460664 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.460749 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-session\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc 
kubenswrapper[4787]: I0126 17:48:44.460915 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.461022 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.461326 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.461527 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-template-login\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.462279 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-template-error\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.467382 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4169f86a-b21a-4302-a6c9-76c340d8e2bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.474376 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9845z\" (UniqueName: \"kubernetes.io/projected/4169f86a-b21a-4302-a6c9-76c340d8e2bb-kube-api-access-9845z\") pod \"oauth-openshift-75c5cdcdb8-rdftx\" (UID: \"4169f86a-b21a-4302-a6c9-76c340d8e2bb\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.495984 4787 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.507911 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.521587 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.757509 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.757586 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.925343 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.949076 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx"] Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.959683 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.959750 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.959793 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.959869 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.959892 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.959915 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.960004 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.960038 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.960185 4787 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.960200 4787 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.960212 4787 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.960357 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:48:44 crc kubenswrapper[4787]: I0126 17:48:44.972449 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.061884 4787 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.061972 4787 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.501987 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" event={"ID":"4169f86a-b21a-4302-a6c9-76c340d8e2bb","Type":"ContainerStarted","Data":"bdb130317aaa2a8c6274773a19474c82bc61aa73004656be18cc6874228f8b30"} Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.502291 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" event={"ID":"4169f86a-b21a-4302-a6c9-76c340d8e2bb","Type":"ContainerStarted","Data":"3ece4e9aeb528cfe927b3a3fd48badcadea3200a9fbfe76e69655f280ab54f69"} Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.502309 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.503844 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.503893 4787 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b" exitCode=137 Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.503930 
4787 scope.go:117] "RemoveContainer" containerID="13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b" Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.503974 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.522857 4787 scope.go:117] "RemoveContainer" containerID="13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b" Jan 26 17:48:45 crc kubenswrapper[4787]: E0126 17:48:45.523416 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b\": container with ID starting with 13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b not found: ID does not exist" containerID="13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b" Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.523463 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b"} err="failed to get container status \"13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b\": rpc error: code = NotFound desc = could not find container \"13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b\": container with ID starting with 13d30697a6ae5294ce45434f213a18984f724bdab84f24c75fbf4591fc59a86b not found: ID does not exist" Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.529793 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" podStartSLOduration=57.529768083 podStartE2EDuration="57.529768083s" podCreationTimestamp="2026-01-26 17:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 17:48:45.524887679 +0000 UTC m=+294.232023822" watchObservedRunningTime="2026-01-26 17:48:45.529768083 +0000 UTC m=+294.236904236" Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.537964 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-rdftx" Jan 26 17:48:45 crc kubenswrapper[4787]: I0126 17:48:45.624585 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 17:48:51 crc kubenswrapper[4787]: I0126 17:48:51.462823 4787 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.000140 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dhsdw"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.000992 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dhsdw" podUID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" containerName="registry-server" containerID="cri-o://c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c" gracePeriod=30 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.011396 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rlvsg"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.012431 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rlvsg" podUID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" containerName="registry-server" containerID="cri-o://d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6" gracePeriod=30 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.028744 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-4mcn6"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.029027 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" podUID="00be38bb-add0-4e45-9412-e9169ee8c3dc" containerName="marketplace-operator" containerID="cri-o://72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e" gracePeriod=30 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.042548 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7jlq"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.042831 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b7jlq" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" containerName="registry-server" containerID="cri-o://265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54" gracePeriod=30 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.054489 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hb78l"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.054800 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hb78l" podUID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" containerName="registry-server" containerID="cri-o://22ae17ead73697363ecf39bb80e9a43639802f5d774d652bb1baa63a8bbd11e4" gracePeriod=30 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.061840 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kgtgb"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.062873 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.082440 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vdcwm"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.082750 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vdcwm" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerName="registry-server" containerID="cri-o://1c6f765e41e377135181b0fcef7374b47ad7d484455c81ae124a3644aee59a36" gracePeriod=30 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.090845 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xgdzk"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.091393 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xgdzk" podUID="21da263d-6313-4403-93fc-220b5e976637" containerName="registry-server" containerID="cri-o://dc0813857e7f1eb6087ffa648d7651526d9143930fdbd312b2c3b5a782358ce9" gracePeriod=30 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.094823 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kgtgb"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.196001 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lpt9\" (UniqueName: \"kubernetes.io/projected/0a585137-91b4-49f6-a28f-b91c1ceb5abc-kube-api-access-7lpt9\") pod \"marketplace-operator-79b997595-kgtgb\" (UID: \"0a585137-91b4-49f6-a28f-b91c1ceb5abc\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.196069 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/0a585137-91b4-49f6-a28f-b91c1ceb5abc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kgtgb\" (UID: \"0a585137-91b4-49f6-a28f-b91c1ceb5abc\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.196106 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a585137-91b4-49f6-a28f-b91c1ceb5abc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kgtgb\" (UID: \"0a585137-91b4-49f6-a28f-b91c1ceb5abc\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.298280 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a585137-91b4-49f6-a28f-b91c1ceb5abc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kgtgb\" (UID: \"0a585137-91b4-49f6-a28f-b91c1ceb5abc\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.298737 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a585137-91b4-49f6-a28f-b91c1ceb5abc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kgtgb\" (UID: \"0a585137-91b4-49f6-a28f-b91c1ceb5abc\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.298883 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lpt9\" (UniqueName: \"kubernetes.io/projected/0a585137-91b4-49f6-a28f-b91c1ceb5abc-kube-api-access-7lpt9\") pod \"marketplace-operator-79b997595-kgtgb\" (UID: \"0a585137-91b4-49f6-a28f-b91c1ceb5abc\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.300591 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a585137-91b4-49f6-a28f-b91c1ceb5abc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kgtgb\" (UID: \"0a585137-91b4-49f6-a28f-b91c1ceb5abc\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.307638 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a585137-91b4-49f6-a28f-b91c1ceb5abc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kgtgb\" (UID: \"0a585137-91b4-49f6-a28f-b91c1ceb5abc\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.318634 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lpt9\" (UniqueName: \"kubernetes.io/projected/0a585137-91b4-49f6-a28f-b91c1ceb5abc-kube-api-access-7lpt9\") pod \"marketplace-operator-79b997595-kgtgb\" (UID: \"0a585137-91b4-49f6-a28f-b91c1ceb5abc\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.383920 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.566101 4787 generic.go:334] "Generic (PLEG): container finished" podID="21da263d-6313-4403-93fc-220b5e976637" containerID="dc0813857e7f1eb6087ffa648d7651526d9143930fdbd312b2c3b5a782358ce9" exitCode=0 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.566179 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgdzk" event={"ID":"21da263d-6313-4403-93fc-220b5e976637","Type":"ContainerDied","Data":"dc0813857e7f1eb6087ffa648d7651526d9143930fdbd312b2c3b5a782358ce9"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.566249 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgdzk" event={"ID":"21da263d-6313-4403-93fc-220b5e976637","Type":"ContainerDied","Data":"a4581c4761b47a673e3f406e6af70b6a5feb7638aa564836868b45b22789e5fe"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.566266 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4581c4761b47a673e3f406e6af70b6a5feb7638aa564836868b45b22789e5fe" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.572382 4787 generic.go:334] "Generic (PLEG): container finished" podID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" containerID="22ae17ead73697363ecf39bb80e9a43639802f5d774d652bb1baa63a8bbd11e4" exitCode=0 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.572452 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.572473 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hb78l" event={"ID":"52aea5c2-6bf8-4f4f-823f-df45dd468c17","Type":"ContainerDied","Data":"22ae17ead73697363ecf39bb80e9a43639802f5d774d652bb1baa63a8bbd11e4"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.572517 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hb78l" event={"ID":"52aea5c2-6bf8-4f4f-823f-df45dd468c17","Type":"ContainerDied","Data":"a09c4cf692637453b9fc7780fc5487fe51e75b8d5a7a39bf43d774fe7b1e2052"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.572533 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a09c4cf692637453b9fc7780fc5487fe51e75b8d5a7a39bf43d774fe7b1e2052" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.579334 4787 generic.go:334] "Generic (PLEG): container finished" podID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerID="1c6f765e41e377135181b0fcef7374b47ad7d484455c81ae124a3644aee59a36" exitCode=0 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.579432 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcwm" event={"ID":"aa0ed742-766e-4e25-887e-555a5420fe8b","Type":"ContainerDied","Data":"1c6f765e41e377135181b0fcef7374b47ad7d484455c81ae124a3644aee59a36"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.579493 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcwm" event={"ID":"aa0ed742-766e-4e25-887e-555a5420fe8b","Type":"ContainerDied","Data":"673c3c66261ecc11f8c6d64ec5386519924db063e08430822defe98b982d5906"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.579514 4787 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="673c3c66261ecc11f8c6d64ec5386519924db063e08430822defe98b982d5906" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.591632 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rlvsg" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.595463 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.597292 4787 generic.go:334] "Generic (PLEG): container finished" podID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" containerID="d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6" exitCode=0 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.597337 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.597405 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlvsg" event={"ID":"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242","Type":"ContainerDied","Data":"d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.597464 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlvsg" event={"ID":"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242","Type":"ContainerDied","Data":"cca27a3b4aa4940835153c553d9f66ba91a8c95b664e032264e9ed7984ab38dc"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.597484 4787 scope.go:117] "RemoveContainer" containerID="d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.620512 4787 generic.go:334] "Generic (PLEG): container finished" podID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" 
containerID="c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c" exitCode=0 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.620577 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhsdw" event={"ID":"1402183e-7fc9-4a3f-96ed-14aa047ffd9b","Type":"ContainerDied","Data":"c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.620606 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhsdw" event={"ID":"1402183e-7fc9-4a3f-96ed-14aa047ffd9b","Type":"ContainerDied","Data":"1a229566ad612f621f9e0bbe638ffbcc416ff82c17e3ea21359b74cb0dcab800"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.620674 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dhsdw" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.622139 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.623669 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.626333 4787 generic.go:334] "Generic (PLEG): container finished" podID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" containerID="265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54" exitCode=0 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.626425 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.626469 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7jlq" event={"ID":"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b","Type":"ContainerDied","Data":"265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.626981 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7jlq" event={"ID":"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b","Type":"ContainerDied","Data":"e684bdf58d0bed8c23a7a6e4ee658331c3c44613f2fa57ddbf5ba3e8ca82f6c2"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.628412 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kgtgb"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.629358 4787 generic.go:334] "Generic (PLEG): container finished" podID="00be38bb-add0-4e45-9412-e9169ee8c3dc" containerID="72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e" exitCode=0 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.629540 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" event={"ID":"00be38bb-add0-4e45-9412-e9169ee8c3dc","Type":"ContainerDied","Data":"72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.629657 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" event={"ID":"00be38bb-add0-4e45-9412-e9169ee8c3dc","Type":"ContainerDied","Data":"4371cbd56233e65b826e606df876224dcb87c9f6ac25abc520b94eb44b281e7c"} Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.629750 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4mcn6" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.636229 4787 scope.go:117] "RemoveContainer" containerID="5944600d89d732a45567252dcb9f022d989cb6da6de1407b46f68265cec6ec72" Jan 26 17:48:52 crc kubenswrapper[4787]: W0126 17:48:52.681925 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a585137_91b4_49f6_a28f_b91c1ceb5abc.slice/crio-ba0f971a0b4283cf35fb90340169617011b19d8f136c6d449ea743e76b49e035 WatchSource:0}: Error finding container ba0f971a0b4283cf35fb90340169617011b19d8f136c6d449ea743e76b49e035: Status 404 returned error can't find the container with id ba0f971a0b4283cf35fb90340169617011b19d8f136c6d449ea743e76b49e035 Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.690711 4787 scope.go:117] "RemoveContainer" containerID="045c76e01bda865dc1cda6a23702b270f4c4f0164ae86975ff2581f83196e211" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.704506 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ddfw\" (UniqueName: \"kubernetes.io/projected/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-kube-api-access-4ddfw\") pod \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.704578 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-catalog-content\") pod \"21da263d-6313-4403-93fc-220b5e976637\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.704632 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7hs5\" (UniqueName: \"kubernetes.io/projected/52aea5c2-6bf8-4f4f-823f-df45dd468c17-kube-api-access-t7hs5\") pod 
\"52aea5c2-6bf8-4f4f-823f-df45dd468c17\" (UID: \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.704659 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-utilities\") pod \"aa0ed742-766e-4e25-887e-555a5420fe8b\" (UID: \"aa0ed742-766e-4e25-887e-555a5420fe8b\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.704724 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdrmh\" (UniqueName: \"kubernetes.io/projected/21da263d-6313-4403-93fc-220b5e976637-kube-api-access-sdrmh\") pod \"21da263d-6313-4403-93fc-220b5e976637\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.704750 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-catalog-content\") pod \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.704793 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-catalog-content\") pod \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.704857 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-trusted-ca\") pod \"00be38bb-add0-4e45-9412-e9169ee8c3dc\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.704878 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-catalog-content\") pod \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\" (UID: \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.704904 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsr2p\" (UniqueName: \"kubernetes.io/projected/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-kube-api-access-lsr2p\") pod \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.705307 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-catalog-content\") pod \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\" (UID: \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.705329 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsm78\" (UniqueName: \"kubernetes.io/projected/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-kube-api-access-vsm78\") pod \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\" (UID: \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.705378 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-utilities\") pod \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\" (UID: \"52aea5c2-6bf8-4f4f-823f-df45dd468c17\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.705440 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-catalog-content\") pod 
\"aa0ed742-766e-4e25-887e-555a5420fe8b\" (UID: \"aa0ed742-766e-4e25-887e-555a5420fe8b\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.705465 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-utilities\") pod \"21da263d-6313-4403-93fc-220b5e976637\" (UID: \"21da263d-6313-4403-93fc-220b5e976637\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.705508 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-utilities\") pod \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\" (UID: \"3d6f5fee-a90c-49e0-b8e1-ed5c564c0242\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.705529 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-utilities\") pod \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\" (UID: \"cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.705548 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bc8k\" (UniqueName: \"kubernetes.io/projected/aa0ed742-766e-4e25-887e-555a5420fe8b-kube-api-access-7bc8k\") pod \"aa0ed742-766e-4e25-887e-555a5420fe8b\" (UID: \"aa0ed742-766e-4e25-887e-555a5420fe8b\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.705580 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-operator-metrics\") pod \"00be38bb-add0-4e45-9412-e9169ee8c3dc\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.705640 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-utilities\") pod \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\" (UID: \"1402183e-7fc9-4a3f-96ed-14aa047ffd9b\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.705705 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spqgs\" (UniqueName: \"kubernetes.io/projected/00be38bb-add0-4e45-9412-e9169ee8c3dc-kube-api-access-spqgs\") pod \"00be38bb-add0-4e45-9412-e9169ee8c3dc\" (UID: \"00be38bb-add0-4e45-9412-e9169ee8c3dc\") " Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.709213 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-utilities" (OuterVolumeSpecName: "utilities") pod "3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" (UID: "3d6f5fee-a90c-49e0-b8e1-ed5c564c0242"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.709909 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-utilities" (OuterVolumeSpecName: "utilities") pod "cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" (UID: "cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.709926 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-utilities" (OuterVolumeSpecName: "utilities") pod "aa0ed742-766e-4e25-887e-555a5420fe8b" (UID: "aa0ed742-766e-4e25-887e-555a5420fe8b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.712308 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-utilities" (OuterVolumeSpecName: "utilities") pod "1402183e-7fc9-4a3f-96ed-14aa047ffd9b" (UID: "1402183e-7fc9-4a3f-96ed-14aa047ffd9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.713432 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00be38bb-add0-4e45-9412-e9169ee8c3dc-kube-api-access-spqgs" (OuterVolumeSpecName: "kube-api-access-spqgs") pod "00be38bb-add0-4e45-9412-e9169ee8c3dc" (UID: "00be38bb-add0-4e45-9412-e9169ee8c3dc"). InnerVolumeSpecName "kube-api-access-spqgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.716515 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-kube-api-access-lsr2p" (OuterVolumeSpecName: "kube-api-access-lsr2p") pod "3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" (UID: "3d6f5fee-a90c-49e0-b8e1-ed5c564c0242"). InnerVolumeSpecName "kube-api-access-lsr2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.716750 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-kube-api-access-4ddfw" (OuterVolumeSpecName: "kube-api-access-4ddfw") pod "1402183e-7fc9-4a3f-96ed-14aa047ffd9b" (UID: "1402183e-7fc9-4a3f-96ed-14aa047ffd9b"). InnerVolumeSpecName "kube-api-access-4ddfw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.717879 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0ed742-766e-4e25-887e-555a5420fe8b-kube-api-access-7bc8k" (OuterVolumeSpecName: "kube-api-access-7bc8k") pod "aa0ed742-766e-4e25-887e-555a5420fe8b" (UID: "aa0ed742-766e-4e25-887e-555a5420fe8b"). InnerVolumeSpecName "kube-api-access-7bc8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.719750 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52aea5c2-6bf8-4f4f-823f-df45dd468c17-kube-api-access-t7hs5" (OuterVolumeSpecName: "kube-api-access-t7hs5") pod "52aea5c2-6bf8-4f4f-823f-df45dd468c17" (UID: "52aea5c2-6bf8-4f4f-823f-df45dd468c17"). InnerVolumeSpecName "kube-api-access-t7hs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.720022 4787 scope.go:117] "RemoveContainer" containerID="d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.720844 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "00be38bb-add0-4e45-9412-e9169ee8c3dc" (UID: "00be38bb-add0-4e45-9412-e9169ee8c3dc"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.721177 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-utilities" (OuterVolumeSpecName: "utilities") pod "21da263d-6313-4403-93fc-220b5e976637" (UID: "21da263d-6313-4403-93fc-220b5e976637"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: E0126 17:48:52.721230 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6\": container with ID starting with d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6 not found: ID does not exist" containerID="d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.721273 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6"} err="failed to get container status \"d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6\": rpc error: code = NotFound desc = could not find container \"d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6\": container with ID starting with d101bec776ae3df628ec0ac8a3b21aacdb2b5b23789e29e5fdbb02688b9586c6 not found: ID does not exist" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.721349 4787 scope.go:117] "RemoveContainer" containerID="5944600d89d732a45567252dcb9f022d989cb6da6de1407b46f68265cec6ec72" Jan 26 17:48:52 crc kubenswrapper[4787]: E0126 17:48:52.721766 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5944600d89d732a45567252dcb9f022d989cb6da6de1407b46f68265cec6ec72\": container with ID starting with 5944600d89d732a45567252dcb9f022d989cb6da6de1407b46f68265cec6ec72 not found: ID does not exist" containerID="5944600d89d732a45567252dcb9f022d989cb6da6de1407b46f68265cec6ec72" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.721800 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5944600d89d732a45567252dcb9f022d989cb6da6de1407b46f68265cec6ec72"} 
err="failed to get container status \"5944600d89d732a45567252dcb9f022d989cb6da6de1407b46f68265cec6ec72\": rpc error: code = NotFound desc = could not find container \"5944600d89d732a45567252dcb9f022d989cb6da6de1407b46f68265cec6ec72\": container with ID starting with 5944600d89d732a45567252dcb9f022d989cb6da6de1407b46f68265cec6ec72 not found: ID does not exist" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.721818 4787 scope.go:117] "RemoveContainer" containerID="045c76e01bda865dc1cda6a23702b270f4c4f0164ae86975ff2581f83196e211" Jan 26 17:48:52 crc kubenswrapper[4787]: E0126 17:48:52.722061 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"045c76e01bda865dc1cda6a23702b270f4c4f0164ae86975ff2581f83196e211\": container with ID starting with 045c76e01bda865dc1cda6a23702b270f4c4f0164ae86975ff2581f83196e211 not found: ID does not exist" containerID="045c76e01bda865dc1cda6a23702b270f4c4f0164ae86975ff2581f83196e211" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.722083 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"045c76e01bda865dc1cda6a23702b270f4c4f0164ae86975ff2581f83196e211"} err="failed to get container status \"045c76e01bda865dc1cda6a23702b270f4c4f0164ae86975ff2581f83196e211\": rpc error: code = NotFound desc = could not find container \"045c76e01bda865dc1cda6a23702b270f4c4f0164ae86975ff2581f83196e211\": container with ID starting with 045c76e01bda865dc1cda6a23702b270f4c4f0164ae86975ff2581f83196e211 not found: ID does not exist" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.722099 4787 scope.go:117] "RemoveContainer" containerID="c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.726243 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-utilities" 
(OuterVolumeSpecName: "utilities") pod "52aea5c2-6bf8-4f4f-823f-df45dd468c17" (UID: "52aea5c2-6bf8-4f4f-823f-df45dd468c17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.727272 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21da263d-6313-4403-93fc-220b5e976637-kube-api-access-sdrmh" (OuterVolumeSpecName: "kube-api-access-sdrmh") pod "21da263d-6313-4403-93fc-220b5e976637" (UID: "21da263d-6313-4403-93fc-220b5e976637"). InnerVolumeSpecName "kube-api-access-sdrmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.734188 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" (UID: "cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.735394 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "00be38bb-add0-4e45-9412-e9169ee8c3dc" (UID: "00be38bb-add0-4e45-9412-e9169ee8c3dc"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.735552 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-kube-api-access-vsm78" (OuterVolumeSpecName: "kube-api-access-vsm78") pod "cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" (UID: "cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b"). 
InnerVolumeSpecName "kube-api-access-vsm78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.759035 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52aea5c2-6bf8-4f4f-823f-df45dd468c17" (UID: "52aea5c2-6bf8-4f4f-823f-df45dd468c17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.762834 4787 scope.go:117] "RemoveContainer" containerID="943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.792797 4787 scope.go:117] "RemoveContainer" containerID="9df4dc3069bc681a9f2c462821b3e2d120dac6d41378730e7eabfba015ac8f40" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.797250 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1402183e-7fc9-4a3f-96ed-14aa047ffd9b" (UID: "1402183e-7fc9-4a3f-96ed-14aa047ffd9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.800991 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" (UID: "3d6f5fee-a90c-49e0-b8e1-ed5c564c0242"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807109 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spqgs\" (UniqueName: \"kubernetes.io/projected/00be38bb-add0-4e45-9412-e9169ee8c3dc-kube-api-access-spqgs\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807146 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ddfw\" (UniqueName: \"kubernetes.io/projected/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-kube-api-access-4ddfw\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807162 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7hs5\" (UniqueName: \"kubernetes.io/projected/52aea5c2-6bf8-4f4f-823f-df45dd468c17-kube-api-access-t7hs5\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807176 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807188 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdrmh\" (UniqueName: \"kubernetes.io/projected/21da263d-6313-4403-93fc-220b5e976637-kube-api-access-sdrmh\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807200 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807210 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 
crc kubenswrapper[4787]: I0126 17:48:52.807221 4787 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807233 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807245 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsr2p\" (UniqueName: \"kubernetes.io/projected/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-kube-api-access-lsr2p\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807256 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsm78\" (UniqueName: \"kubernetes.io/projected/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-kube-api-access-vsm78\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807269 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807280 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52aea5c2-6bf8-4f4f-823f-df45dd468c17-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807291 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807302 4787 reconciler_common.go:293] "Volume detached 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807314 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bc8k\" (UniqueName: \"kubernetes.io/projected/aa0ed742-766e-4e25-887e-555a5420fe8b-kube-api-access-7bc8k\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807324 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807335 4787 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/00be38bb-add0-4e45-9412-e9169ee8c3dc-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.807345 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1402183e-7fc9-4a3f-96ed-14aa047ffd9b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.811917 4787 scope.go:117] "RemoveContainer" containerID="c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c" Jan 26 17:48:52 crc kubenswrapper[4787]: E0126 17:48:52.812608 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c\": container with ID starting with c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c not found: ID does not exist" containerID="c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.812690 4787 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c"} err="failed to get container status \"c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c\": rpc error: code = NotFound desc = could not find container \"c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c\": container with ID starting with c252e4f405ca2c6fe642f531025a6c7f84739f2cbee92ed918b4c1e05e56e12c not found: ID does not exist" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.812721 4787 scope.go:117] "RemoveContainer" containerID="943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0" Jan 26 17:48:52 crc kubenswrapper[4787]: E0126 17:48:52.813706 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0\": container with ID starting with 943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0 not found: ID does not exist" containerID="943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.813813 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0"} err="failed to get container status \"943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0\": rpc error: code = NotFound desc = could not find container \"943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0\": container with ID starting with 943061a6d592757c1a8f59d4025ba6be03eb4020181e5e99206c43931ad465a0 not found: ID does not exist" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.813903 4787 scope.go:117] "RemoveContainer" containerID="9df4dc3069bc681a9f2c462821b3e2d120dac6d41378730e7eabfba015ac8f40" Jan 26 17:48:52 crc kubenswrapper[4787]: E0126 17:48:52.814356 4787 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9df4dc3069bc681a9f2c462821b3e2d120dac6d41378730e7eabfba015ac8f40\": container with ID starting with 9df4dc3069bc681a9f2c462821b3e2d120dac6d41378730e7eabfba015ac8f40 not found: ID does not exist" containerID="9df4dc3069bc681a9f2c462821b3e2d120dac6d41378730e7eabfba015ac8f40" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.814379 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9df4dc3069bc681a9f2c462821b3e2d120dac6d41378730e7eabfba015ac8f40"} err="failed to get container status \"9df4dc3069bc681a9f2c462821b3e2d120dac6d41378730e7eabfba015ac8f40\": rpc error: code = NotFound desc = could not find container \"9df4dc3069bc681a9f2c462821b3e2d120dac6d41378730e7eabfba015ac8f40\": container with ID starting with 9df4dc3069bc681a9f2c462821b3e2d120dac6d41378730e7eabfba015ac8f40 not found: ID does not exist" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.814394 4787 scope.go:117] "RemoveContainer" containerID="265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.831168 4787 scope.go:117] "RemoveContainer" containerID="95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.846071 4787 scope.go:117] "RemoveContainer" containerID="b3b80f45dfb99a8018f2b3bab892f97cdba4d42c09ce34afe95eab5fd6be082e" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.863128 4787 scope.go:117] "RemoveContainer" containerID="265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54" Jan 26 17:48:52 crc kubenswrapper[4787]: E0126 17:48:52.867216 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54\": container with ID starting with 
265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54 not found: ID does not exist" containerID="265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.867278 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54"} err="failed to get container status \"265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54\": rpc error: code = NotFound desc = could not find container \"265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54\": container with ID starting with 265cafe88c71cb5039e0226e01b805692855938d6ad364e90f74a068354afc54 not found: ID does not exist" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.867316 4787 scope.go:117] "RemoveContainer" containerID="95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09" Jan 26 17:48:52 crc kubenswrapper[4787]: E0126 17:48:52.867799 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09\": container with ID starting with 95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09 not found: ID does not exist" containerID="95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.867835 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09"} err="failed to get container status \"95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09\": rpc error: code = NotFound desc = could not find container \"95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09\": container with ID starting with 95116ce5fbf787efba0d38ee572ef68c29fbf401101a4a99710a16d76292bc09 not found: ID does not 
exist" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.867881 4787 scope.go:117] "RemoveContainer" containerID="b3b80f45dfb99a8018f2b3bab892f97cdba4d42c09ce34afe95eab5fd6be082e" Jan 26 17:48:52 crc kubenswrapper[4787]: E0126 17:48:52.868223 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3b80f45dfb99a8018f2b3bab892f97cdba4d42c09ce34afe95eab5fd6be082e\": container with ID starting with b3b80f45dfb99a8018f2b3bab892f97cdba4d42c09ce34afe95eab5fd6be082e not found: ID does not exist" containerID="b3b80f45dfb99a8018f2b3bab892f97cdba4d42c09ce34afe95eab5fd6be082e" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.868244 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b80f45dfb99a8018f2b3bab892f97cdba4d42c09ce34afe95eab5fd6be082e"} err="failed to get container status \"b3b80f45dfb99a8018f2b3bab892f97cdba4d42c09ce34afe95eab5fd6be082e\": rpc error: code = NotFound desc = could not find container \"b3b80f45dfb99a8018f2b3bab892f97cdba4d42c09ce34afe95eab5fd6be082e\": container with ID starting with b3b80f45dfb99a8018f2b3bab892f97cdba4d42c09ce34afe95eab5fd6be082e not found: ID does not exist" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.868277 4787 scope.go:117] "RemoveContainer" containerID="72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.882182 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21da263d-6313-4403-93fc-220b5e976637" (UID: "21da263d-6313-4403-93fc-220b5e976637"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.882675 4787 scope.go:117] "RemoveContainer" containerID="72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e" Jan 26 17:48:52 crc kubenswrapper[4787]: E0126 17:48:52.883166 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e\": container with ID starting with 72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e not found: ID does not exist" containerID="72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.883208 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e"} err="failed to get container status \"72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e\": rpc error: code = NotFound desc = could not find container \"72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e\": container with ID starting with 72110792de3e14cfa010d1a7c6de3e162041996f6bb90c8bfef464cb516fdb5e not found: ID does not exist" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.890988 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa0ed742-766e-4e25-887e-555a5420fe8b" (UID: "aa0ed742-766e-4e25-887e-555a5420fe8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.908864 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21da263d-6313-4403-93fc-220b5e976637-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.908905 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0ed742-766e-4e25-887e-555a5420fe8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.945406 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dhsdw"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.947438 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dhsdw"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.959982 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4mcn6"] Jan 26 17:48:52 crc kubenswrapper[4787]: I0126 17:48:52.963538 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4mcn6"] Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.596145 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00be38bb-add0-4e45-9412-e9169ee8c3dc" path="/var/lib/kubelet/pods/00be38bb-add0-4e45-9412-e9169ee8c3dc/volumes" Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.596625 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" path="/var/lib/kubelet/pods/1402183e-7fc9-4a3f-96ed-14aa047ffd9b/volumes" Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.638106 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rlvsg" Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.639340 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7jlq" Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.643968 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xgdzk" Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.643985 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hb78l" Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.643981 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" event={"ID":"0a585137-91b4-49f6-a28f-b91c1ceb5abc","Type":"ContainerStarted","Data":"859cfa96ae86f4460e9d547c48d8f92839c7e13749b905e9062664f653250485"} Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.644057 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" event={"ID":"0a585137-91b4-49f6-a28f-b91c1ceb5abc","Type":"ContainerStarted","Data":"ba0f971a0b4283cf35fb90340169617011b19d8f136c6d449ea743e76b49e035"} Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.644083 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vdcwm" Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.644769 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.650996 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.675321 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kgtgb" podStartSLOduration=1.6753010179999999 podStartE2EDuration="1.675301018s" podCreationTimestamp="2026-01-26 17:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:48:53.673487098 +0000 UTC m=+302.380623231" watchObservedRunningTime="2026-01-26 17:48:53.675301018 +0000 UTC m=+302.382437151" Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.693591 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xgdzk"] Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.697392 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xgdzk"] Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.729797 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vdcwm"] Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.732877 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vdcwm"] Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.740841 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hb78l"] Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.744728 4787 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hb78l"] Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.752100 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7jlq"] Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.755555 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7jlq"] Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.765085 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rlvsg"] Jan 26 17:48:53 crc kubenswrapper[4787]: I0126 17:48:53.768052 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rlvsg"] Jan 26 17:48:55 crc kubenswrapper[4787]: I0126 17:48:55.609727 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21da263d-6313-4403-93fc-220b5e976637" path="/var/lib/kubelet/pods/21da263d-6313-4403-93fc-220b5e976637/volumes" Jan 26 17:48:55 crc kubenswrapper[4787]: I0126 17:48:55.611064 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" path="/var/lib/kubelet/pods/3d6f5fee-a90c-49e0-b8e1-ed5c564c0242/volumes" Jan 26 17:48:55 crc kubenswrapper[4787]: I0126 17:48:55.611740 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" path="/var/lib/kubelet/pods/52aea5c2-6bf8-4f4f-823f-df45dd468c17/volumes" Jan 26 17:48:55 crc kubenswrapper[4787]: I0126 17:48:55.612752 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" path="/var/lib/kubelet/pods/aa0ed742-766e-4e25-887e-555a5420fe8b/volumes" Jan 26 17:48:55 crc kubenswrapper[4787]: I0126 17:48:55.613328 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" 
path="/var/lib/kubelet/pods/cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b/volumes" Jan 26 17:49:10 crc kubenswrapper[4787]: I0126 17:49:10.889324 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7fgjp"] Jan 26 17:49:10 crc kubenswrapper[4787]: I0126 17:49:10.890122 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" podUID="bc208e27-2a4a-49a5-b3b4-1880249b93ed" containerName="controller-manager" containerID="cri-o://1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21" gracePeriod=30 Jan 26 17:49:10 crc kubenswrapper[4787]: I0126 17:49:10.990851 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn"] Jan 26 17:49:10 crc kubenswrapper[4787]: I0126 17:49:10.991091 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" podUID="3ec03f76-9431-4d70-84aa-c1073d8d5e44" containerName="route-controller-manager" containerID="cri-o://e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087" gracePeriod=30 Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.237272 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.324644 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.325865 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-proxy-ca-bundles\") pod \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.325917 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc208e27-2a4a-49a5-b3b4-1880249b93ed-serving-cert\") pod \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.325979 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-config\") pod \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.325999 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg4r6\" (UniqueName: \"kubernetes.io/projected/bc208e27-2a4a-49a5-b3b4-1880249b93ed-kube-api-access-hg4r6\") pod \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.326047 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-client-ca\") pod \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\" (UID: \"bc208e27-2a4a-49a5-b3b4-1880249b93ed\") " Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.326897 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-client-ca" (OuterVolumeSpecName: "client-ca") pod "bc208e27-2a4a-49a5-b3b4-1880249b93ed" (UID: "bc208e27-2a4a-49a5-b3b4-1880249b93ed"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.327371 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bc208e27-2a4a-49a5-b3b4-1880249b93ed" (UID: "bc208e27-2a4a-49a5-b3b4-1880249b93ed"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.327416 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-config" (OuterVolumeSpecName: "config") pod "bc208e27-2a4a-49a5-b3b4-1880249b93ed" (UID: "bc208e27-2a4a-49a5-b3b4-1880249b93ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.333053 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc208e27-2a4a-49a5-b3b4-1880249b93ed-kube-api-access-hg4r6" (OuterVolumeSpecName: "kube-api-access-hg4r6") pod "bc208e27-2a4a-49a5-b3b4-1880249b93ed" (UID: "bc208e27-2a4a-49a5-b3b4-1880249b93ed"). InnerVolumeSpecName "kube-api-access-hg4r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.333239 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc208e27-2a4a-49a5-b3b4-1880249b93ed-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc208e27-2a4a-49a5-b3b4-1880249b93ed" (UID: "bc208e27-2a4a-49a5-b3b4-1880249b93ed"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.427063 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-client-ca\") pod \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.427463 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec03f76-9431-4d70-84aa-c1073d8d5e44-serving-cert\") pod \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.427486 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-config\") pod \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.427507 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45jkb\" (UniqueName: \"kubernetes.io/projected/3ec03f76-9431-4d70-84aa-c1073d8d5e44-kube-api-access-45jkb\") pod \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\" (UID: \"3ec03f76-9431-4d70-84aa-c1073d8d5e44\") " Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.427741 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.427751 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg4r6\" (UniqueName: \"kubernetes.io/projected/bc208e27-2a4a-49a5-b3b4-1880249b93ed-kube-api-access-hg4r6\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:11 
crc kubenswrapper[4787]: I0126 17:49:11.427761 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.427770 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc208e27-2a4a-49a5-b3b4-1880249b93ed-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.427779 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc208e27-2a4a-49a5-b3b4-1880249b93ed-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.427851 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-client-ca" (OuterVolumeSpecName: "client-ca") pod "3ec03f76-9431-4d70-84aa-c1073d8d5e44" (UID: "3ec03f76-9431-4d70-84aa-c1073d8d5e44"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.428710 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-config" (OuterVolumeSpecName: "config") pod "3ec03f76-9431-4d70-84aa-c1073d8d5e44" (UID: "3ec03f76-9431-4d70-84aa-c1073d8d5e44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.430706 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec03f76-9431-4d70-84aa-c1073d8d5e44-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3ec03f76-9431-4d70-84aa-c1073d8d5e44" (UID: "3ec03f76-9431-4d70-84aa-c1073d8d5e44"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.430739 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec03f76-9431-4d70-84aa-c1073d8d5e44-kube-api-access-45jkb" (OuterVolumeSpecName: "kube-api-access-45jkb") pod "3ec03f76-9431-4d70-84aa-c1073d8d5e44" (UID: "3ec03f76-9431-4d70-84aa-c1073d8d5e44"). InnerVolumeSpecName "kube-api-access-45jkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.529264 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.529298 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec03f76-9431-4d70-84aa-c1073d8d5e44-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.529306 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec03f76-9431-4d70-84aa-c1073d8d5e44-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.529315 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45jkb\" (UniqueName: \"kubernetes.io/projected/3ec03f76-9431-4d70-84aa-c1073d8d5e44-kube-api-access-45jkb\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.736302 4787 generic.go:334] "Generic (PLEG): container finished" podID="bc208e27-2a4a-49a5-b3b4-1880249b93ed" containerID="1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21" exitCode=0 Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.736386 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" event={"ID":"bc208e27-2a4a-49a5-b3b4-1880249b93ed","Type":"ContainerDied","Data":"1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21"} Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.736392 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.736418 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7fgjp" event={"ID":"bc208e27-2a4a-49a5-b3b4-1880249b93ed","Type":"ContainerDied","Data":"bf5336230ee124a20173854b2b3922fe2bc6de9531f69455b3e57b246100b79f"} Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.736435 4787 scope.go:117] "RemoveContainer" containerID="1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.738684 4787 generic.go:334] "Generic (PLEG): container finished" podID="3ec03f76-9431-4d70-84aa-c1073d8d5e44" containerID="e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087" exitCode=0 Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.738704 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" event={"ID":"3ec03f76-9431-4d70-84aa-c1073d8d5e44","Type":"ContainerDied","Data":"e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087"} Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.738723 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" event={"ID":"3ec03f76-9431-4d70-84aa-c1073d8d5e44","Type":"ContainerDied","Data":"5a5f92c307d80c71dedde26774bd476f3addd6313d46fe6c54e76c494e994bfe"} Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.738766 4787 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.755452 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7fgjp"] Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.759096 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7fgjp"] Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.762902 4787 scope.go:117] "RemoveContainer" containerID="1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.763043 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn"] Jan 26 17:49:11 crc kubenswrapper[4787]: E0126 17:49:11.763455 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21\": container with ID starting with 1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21 not found: ID does not exist" containerID="1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.763521 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21"} err="failed to get container status \"1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21\": rpc error: code = NotFound desc = could not find container \"1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21\": container with ID starting with 1d14f19cb8c3b47577bc68fe6a8653b2721520e06382788ac386e669b70c5e21 not found: ID does not exist" Jan 26 17:49:11 crc kubenswrapper[4787]: 
I0126 17:49:11.763566 4787 scope.go:117] "RemoveContainer" containerID="e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.767163 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kblzn"] Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.779369 4787 scope.go:117] "RemoveContainer" containerID="e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087" Jan 26 17:49:11 crc kubenswrapper[4787]: E0126 17:49:11.779847 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087\": container with ID starting with e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087 not found: ID does not exist" containerID="e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087" Jan 26 17:49:11 crc kubenswrapper[4787]: I0126 17:49:11.779898 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087"} err="failed to get container status \"e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087\": rpc error: code = NotFound desc = could not find container \"e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087\": container with ID starting with e7f72cb1dd042f394319d87472499f42bdcb374b052e3c22983fa868bd374087 not found: ID does not exist" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.205706 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5"] Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.205948 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" containerName="extract-utilities" Jan 26 17:49:12 
crc kubenswrapper[4787]: I0126 17:49:12.205976 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" containerName="extract-utilities" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.205984 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerName="extract-utilities" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.205991 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerName="extract-utilities" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.205997 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" containerName="extract-content" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206002 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" containerName="extract-content" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206011 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerName="extract-content" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206016 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerName="extract-content" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206024 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" containerName="extract-content" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206030 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" containerName="extract-content" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206039 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" containerName="registry-server" Jan 26 17:49:12 crc 
kubenswrapper[4787]: I0126 17:49:12.206044 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206054 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec03f76-9431-4d70-84aa-c1073d8d5e44" containerName="route-controller-manager" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206059 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec03f76-9431-4d70-84aa-c1073d8d5e44" containerName="route-controller-manager" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206066 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" containerName="extract-utilities" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206072 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" containerName="extract-utilities" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206080 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206085 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206101 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" containerName="extract-content" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206107 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" containerName="extract-content" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206115 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21da263d-6313-4403-93fc-220b5e976637" containerName="extract-content" Jan 26 
17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206121 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="21da263d-6313-4403-93fc-220b5e976637" containerName="extract-content" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206128 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21da263d-6313-4403-93fc-220b5e976637" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206133 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="21da263d-6313-4403-93fc-220b5e976637" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206142 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206147 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206154 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" containerName="extract-utilities" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206160 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" containerName="extract-utilities" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206167 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00be38bb-add0-4e45-9412-e9169ee8c3dc" containerName="marketplace-operator" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206172 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="00be38bb-add0-4e45-9412-e9169ee8c3dc" containerName="marketplace-operator" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206181 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" containerName="registry-server" Jan 26 
17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206187 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206193 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206198 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206204 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc208e27-2a4a-49a5-b3b4-1880249b93ed" containerName="controller-manager" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206210 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc208e27-2a4a-49a5-b3b4-1880249b93ed" containerName="controller-manager" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206217 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" containerName="extract-utilities" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206222 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" containerName="extract-utilities" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206230 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" containerName="extract-content" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206236 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" containerName="extract-content" Jan 26 17:49:12 crc kubenswrapper[4787]: E0126 17:49:12.206243 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21da263d-6313-4403-93fc-220b5e976637" containerName="extract-utilities" Jan 26 
17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206249 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="21da263d-6313-4403-93fc-220b5e976637" containerName="extract-utilities" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206336 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="00be38bb-add0-4e45-9412-e9169ee8c3dc" containerName="marketplace-operator" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206347 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbda1b3c-5e5b-470c-a7fb-3e7fe19e205b" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206354 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0ed742-766e-4e25-887e-555a5420fe8b" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206360 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc208e27-2a4a-49a5-b3b4-1880249b93ed" containerName="controller-manager" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206369 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="52aea5c2-6bf8-4f4f-823f-df45dd468c17" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206375 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6f5fee-a90c-49e0-b8e1-ed5c564c0242" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206382 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec03f76-9431-4d70-84aa-c1073d8d5e44" containerName="route-controller-manager" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206392 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="21da263d-6313-4403-93fc-220b5e976637" containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206398 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1402183e-7fc9-4a3f-96ed-14aa047ffd9b" 
containerName="registry-server" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.206776 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.209658 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.209836 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.210040 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.210202 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.210339 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.210575 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.211451 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f58f847f-9vjrf"] Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.212269 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.213723 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.213803 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.213860 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.213988 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.214015 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.214044 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.217292 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f58f847f-9vjrf"] Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.222051 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5"] Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.222639 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.234892 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/68fe3f4b-9983-4963-a0fb-8a153d178b72-serving-cert\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.234940 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zctrb\" (UniqueName: \"kubernetes.io/projected/68fe3f4b-9983-4963-a0fb-8a153d178b72-kube-api-access-zctrb\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.234979 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-proxy-ca-bundles\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.235008 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-client-ca\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.235131 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-config\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " 
pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.235200 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-client-ca\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.235225 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6mn7\" (UniqueName: \"kubernetes.io/projected/72e237d5-13b6-40df-a9ce-6b73b134a064-kube-api-access-s6mn7\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.235259 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72e237d5-13b6-40df-a9ce-6b73b134a064-serving-cert\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.235279 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-config\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.336352 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-client-ca\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.336405 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-config\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.336439 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-client-ca\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.336460 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6mn7\" (UniqueName: \"kubernetes.io/projected/72e237d5-13b6-40df-a9ce-6b73b134a064-kube-api-access-s6mn7\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.336482 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72e237d5-13b6-40df-a9ce-6b73b134a064-serving-cert\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.336500 
4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-config\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.336519 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68fe3f4b-9983-4963-a0fb-8a153d178b72-serving-cert\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.336546 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zctrb\" (UniqueName: \"kubernetes.io/projected/68fe3f4b-9983-4963-a0fb-8a153d178b72-kube-api-access-zctrb\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.336566 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-proxy-ca-bundles\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.337791 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-config\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " 
pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.337803 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-proxy-ca-bundles\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.337874 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-config\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.337968 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-client-ca\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.338521 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-client-ca\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.341957 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72e237d5-13b6-40df-a9ce-6b73b134a064-serving-cert\") pod \"controller-manager-f58f847f-9vjrf\" 
(UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.349949 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68fe3f4b-9983-4963-a0fb-8a153d178b72-serving-cert\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.352059 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zctrb\" (UniqueName: \"kubernetes.io/projected/68fe3f4b-9983-4963-a0fb-8a153d178b72-kube-api-access-zctrb\") pod \"route-controller-manager-548c896485-kvxr5\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.354842 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6mn7\" (UniqueName: \"kubernetes.io/projected/72e237d5-13b6-40df-a9ce-6b73b134a064-kube-api-access-s6mn7\") pod \"controller-manager-f58f847f-9vjrf\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.536282 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.542402 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:12 crc kubenswrapper[4787]: I0126 17:49:12.955498 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5"] Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.006068 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f58f847f-9vjrf"] Jan 26 17:49:13 crc kubenswrapper[4787]: W0126 17:49:13.006955 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72e237d5_13b6_40df_a9ce_6b73b134a064.slice/crio-f89ff3791145c119dcdc05aa25c1f7d857711a34255275f8ee55649ad1834321 WatchSource:0}: Error finding container f89ff3791145c119dcdc05aa25c1f7d857711a34255275f8ee55649ad1834321: Status 404 returned error can't find the container with id f89ff3791145c119dcdc05aa25c1f7d857711a34255275f8ee55649ad1834321 Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.598068 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec03f76-9431-4d70-84aa-c1073d8d5e44" path="/var/lib/kubelet/pods/3ec03f76-9431-4d70-84aa-c1073d8d5e44/volumes" Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.599264 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc208e27-2a4a-49a5-b3b4-1880249b93ed" path="/var/lib/kubelet/pods/bc208e27-2a4a-49a5-b3b4-1880249b93ed/volumes" Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.752308 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" event={"ID":"72e237d5-13b6-40df-a9ce-6b73b134a064","Type":"ContainerStarted","Data":"48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1"} Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.752375 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" event={"ID":"72e237d5-13b6-40df-a9ce-6b73b134a064","Type":"ContainerStarted","Data":"f89ff3791145c119dcdc05aa25c1f7d857711a34255275f8ee55649ad1834321"} Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.752516 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.753927 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" event={"ID":"68fe3f4b-9983-4963-a0fb-8a153d178b72","Type":"ContainerStarted","Data":"935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d"} Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.754001 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" event={"ID":"68fe3f4b-9983-4963-a0fb-8a153d178b72","Type":"ContainerStarted","Data":"a8487e0d5a849f4f7c0c1bdcb467b6a1f1fb8b5a5935049016e5745975b293ed"} Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.754019 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.760737 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.786853 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" podStartSLOduration=3.786833454 podStartE2EDuration="3.786833454s" podCreationTimestamp="2026-01-26 17:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 17:49:13.781941289 +0000 UTC m=+322.489077422" watchObservedRunningTime="2026-01-26 17:49:13.786833454 +0000 UTC m=+322.493969587" Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.807380 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" podStartSLOduration=2.80736073 podStartE2EDuration="2.80736073s" podCreationTimestamp="2026-01-26 17:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:49:13.804908693 +0000 UTC m=+322.512044826" watchObservedRunningTime="2026-01-26 17:49:13.80736073 +0000 UTC m=+322.514496863" Jan 26 17:49:13 crc kubenswrapper[4787]: I0126 17:49:13.920045 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.272264 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f58f847f-9vjrf"] Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.272722 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" podUID="72e237d5-13b6-40df-a9ce-6b73b134a064" containerName="controller-manager" containerID="cri-o://48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1" gracePeriod=30 Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.285058 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5"] Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.285239 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" 
podUID="68fe3f4b-9983-4963-a0fb-8a153d178b72" containerName="route-controller-manager" containerID="cri-o://935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d" gracePeriod=30 Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.764694 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.769664 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.774023 4787 generic.go:334] "Generic (PLEG): container finished" podID="72e237d5-13b6-40df-a9ce-6b73b134a064" containerID="48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1" exitCode=0 Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.774111 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.774110 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" event={"ID":"72e237d5-13b6-40df-a9ce-6b73b134a064","Type":"ContainerDied","Data":"48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1"} Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.774227 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f58f847f-9vjrf" event={"ID":"72e237d5-13b6-40df-a9ce-6b73b134a064","Type":"ContainerDied","Data":"f89ff3791145c119dcdc05aa25c1f7d857711a34255275f8ee55649ad1834321"} Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.774257 4787 scope.go:117] "RemoveContainer" containerID="48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.778044 4787 generic.go:334] "Generic (PLEG): container finished" podID="68fe3f4b-9983-4963-a0fb-8a153d178b72" containerID="935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d" exitCode=0 Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.778095 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" event={"ID":"68fe3f4b-9983-4963-a0fb-8a153d178b72","Type":"ContainerDied","Data":"935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d"} Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.778112 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.778139 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5" event={"ID":"68fe3f4b-9983-4963-a0fb-8a153d178b72","Type":"ContainerDied","Data":"a8487e0d5a849f4f7c0c1bdcb467b6a1f1fb8b5a5935049016e5745975b293ed"} Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.805697 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-client-ca\") pod \"72e237d5-13b6-40df-a9ce-6b73b134a064\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.805783 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68fe3f4b-9983-4963-a0fb-8a153d178b72-serving-cert\") pod \"68fe3f4b-9983-4963-a0fb-8a153d178b72\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.805825 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72e237d5-13b6-40df-a9ce-6b73b134a064-serving-cert\") pod \"72e237d5-13b6-40df-a9ce-6b73b134a064\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.805866 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-client-ca\") pod \"68fe3f4b-9983-4963-a0fb-8a153d178b72\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.805918 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-config\") pod \"72e237d5-13b6-40df-a9ce-6b73b134a064\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.807150 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-config\") pod \"68fe3f4b-9983-4963-a0fb-8a153d178b72\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.807212 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6mn7\" (UniqueName: \"kubernetes.io/projected/72e237d5-13b6-40df-a9ce-6b73b134a064-kube-api-access-s6mn7\") pod \"72e237d5-13b6-40df-a9ce-6b73b134a064\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.807237 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-proxy-ca-bundles\") pod \"72e237d5-13b6-40df-a9ce-6b73b134a064\" (UID: \"72e237d5-13b6-40df-a9ce-6b73b134a064\") " Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.807279 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zctrb\" (UniqueName: \"kubernetes.io/projected/68fe3f4b-9983-4963-a0fb-8a153d178b72-kube-api-access-zctrb\") pod \"68fe3f4b-9983-4963-a0fb-8a153d178b72\" (UID: \"68fe3f4b-9983-4963-a0fb-8a153d178b72\") " Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.809039 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-config" (OuterVolumeSpecName: "config") pod "68fe3f4b-9983-4963-a0fb-8a153d178b72" (UID: "68fe3f4b-9983-4963-a0fb-8a153d178b72"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.808881 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-client-ca" (OuterVolumeSpecName: "client-ca") pod "68fe3f4b-9983-4963-a0fb-8a153d178b72" (UID: "68fe3f4b-9983-4963-a0fb-8a153d178b72"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.809411 4787 scope.go:117] "RemoveContainer" containerID="48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.809905 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-client-ca" (OuterVolumeSpecName: "client-ca") pod "72e237d5-13b6-40df-a9ce-6b73b134a064" (UID: "72e237d5-13b6-40df-a9ce-6b73b134a064"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.810020 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "72e237d5-13b6-40df-a9ce-6b73b134a064" (UID: "72e237d5-13b6-40df-a9ce-6b73b134a064"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.810154 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-config" (OuterVolumeSpecName: "config") pod "72e237d5-13b6-40df-a9ce-6b73b134a064" (UID: "72e237d5-13b6-40df-a9ce-6b73b134a064"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.813365 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68fe3f4b-9983-4963-a0fb-8a153d178b72-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "68fe3f4b-9983-4963-a0fb-8a153d178b72" (UID: "68fe3f4b-9983-4963-a0fb-8a153d178b72"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.813791 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e237d5-13b6-40df-a9ce-6b73b134a064-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "72e237d5-13b6-40df-a9ce-6b73b134a064" (UID: "72e237d5-13b6-40df-a9ce-6b73b134a064"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.814093 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e237d5-13b6-40df-a9ce-6b73b134a064-kube-api-access-s6mn7" (OuterVolumeSpecName: "kube-api-access-s6mn7") pod "72e237d5-13b6-40df-a9ce-6b73b134a064" (UID: "72e237d5-13b6-40df-a9ce-6b73b134a064"). InnerVolumeSpecName "kube-api-access-s6mn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.815262 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68fe3f4b-9983-4963-a0fb-8a153d178b72-kube-api-access-zctrb" (OuterVolumeSpecName: "kube-api-access-zctrb") pod "68fe3f4b-9983-4963-a0fb-8a153d178b72" (UID: "68fe3f4b-9983-4963-a0fb-8a153d178b72"). InnerVolumeSpecName "kube-api-access-zctrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:49:17 crc kubenswrapper[4787]: E0126 17:49:17.816582 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1\": container with ID starting with 48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1 not found: ID does not exist" containerID="48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.816623 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1"} err="failed to get container status \"48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1\": rpc error: code = NotFound desc = could not find container \"48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1\": container with ID starting with 48a009a7085724d72dac5061072c36af57c4c4de325013026b5301229709fab1 not found: ID does not exist" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.816652 4787 scope.go:117] "RemoveContainer" containerID="935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.833813 4787 scope.go:117] "RemoveContainer" containerID="935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d" Jan 26 17:49:17 crc kubenswrapper[4787]: E0126 17:49:17.834480 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d\": container with ID starting with 935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d not found: ID does not exist" containerID="935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.834531 
4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d"} err="failed to get container status \"935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d\": rpc error: code = NotFound desc = could not find container \"935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d\": container with ID starting with 935f4c4910f8c40068e4f5bf6598abd808b0cd18d8a5609e3d396dc52f787f1d not found: ID does not exist" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.908682 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68fe3f4b-9983-4963-a0fb-8a153d178b72-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.908716 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72e237d5-13b6-40df-a9ce-6b73b134a064-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.908725 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.908734 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.908742 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68fe3f4b-9983-4963-a0fb-8a153d178b72-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.908749 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.908758 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6mn7\" (UniqueName: \"kubernetes.io/projected/72e237d5-13b6-40df-a9ce-6b73b134a064-kube-api-access-s6mn7\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.908845 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zctrb\" (UniqueName: \"kubernetes.io/projected/68fe3f4b-9983-4963-a0fb-8a153d178b72-kube-api-access-zctrb\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:17 crc kubenswrapper[4787]: I0126 17:49:17.908853 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72e237d5-13b6-40df-a9ce-6b73b134a064-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:49:18 crc kubenswrapper[4787]: I0126 17:49:18.114666 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5"] Jan 26 17:49:18 crc kubenswrapper[4787]: I0126 17:49:18.117791 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548c896485-kvxr5"] Jan 26 17:49:18 crc kubenswrapper[4787]: I0126 17:49:18.131407 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f58f847f-9vjrf"] Jan 26 17:49:18 crc kubenswrapper[4787]: I0126 17:49:18.134627 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f58f847f-9vjrf"] Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.213269 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk"] Jan 26 17:49:19 crc kubenswrapper[4787]: E0126 17:49:19.213533 4787 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68fe3f4b-9983-4963-a0fb-8a153d178b72" containerName="route-controller-manager" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.213550 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="68fe3f4b-9983-4963-a0fb-8a153d178b72" containerName="route-controller-manager" Jan 26 17:49:19 crc kubenswrapper[4787]: E0126 17:49:19.213567 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e237d5-13b6-40df-a9ce-6b73b134a064" containerName="controller-manager" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.213576 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e237d5-13b6-40df-a9ce-6b73b134a064" containerName="controller-manager" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.213668 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e237d5-13b6-40df-a9ce-6b73b134a064" containerName="controller-manager" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.213686 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="68fe3f4b-9983-4963-a0fb-8a153d178b72" containerName="route-controller-manager" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.214304 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.217709 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.217794 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.217992 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.218023 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.219706 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.219912 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.221937 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cc8d77586-nlmwd"] Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.223070 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.228874 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cc8d77586-nlmwd"] Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.273632 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.273913 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.273937 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.274237 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.274478 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.276377 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.281040 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.281349 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk"] Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.325706 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-config\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.325764 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-config\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: \"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.325792 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-client-ca\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.325864 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-client-ca\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: \"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.325888 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjnc6\" (UniqueName: \"kubernetes.io/projected/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-kube-api-access-cjnc6\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: \"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " 
pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.326076 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e957e72b-a032-4c37-930d-93f51e5c0f5a-serving-cert\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.326138 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-serving-cert\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: \"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.326208 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9x52\" (UniqueName: \"kubernetes.io/projected/e957e72b-a032-4c37-930d-93f51e5c0f5a-kube-api-access-p9x52\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.326340 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-proxy-ca-bundles\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.427376 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-config\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.427427 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-config\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: \"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.427447 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-client-ca\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.427465 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-client-ca\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: \"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.427483 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjnc6\" (UniqueName: \"kubernetes.io/projected/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-kube-api-access-cjnc6\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: \"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 
17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.427518 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e957e72b-a032-4c37-930d-93f51e5c0f5a-serving-cert\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.427539 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-serving-cert\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: \"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.427564 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9x52\" (UniqueName: \"kubernetes.io/projected/e957e72b-a032-4c37-930d-93f51e5c0f5a-kube-api-access-p9x52\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.427589 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-proxy-ca-bundles\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.428790 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-client-ca\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: 
\"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.428813 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-proxy-ca-bundles\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.429188 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-client-ca\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.430856 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-config\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.433788 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-serving-cert\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: \"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.437380 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e957e72b-a032-4c37-930d-93f51e5c0f5a-serving-cert\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.445102 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-config\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: \"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.445929 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjnc6\" (UniqueName: \"kubernetes.io/projected/e8ebc024-bca3-4bc8-9b6f-6edceee55fe3-kube-api-access-cjnc6\") pod \"route-controller-manager-8465d68986-rmdqk\" (UID: \"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3\") " pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.448606 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9x52\" (UniqueName: \"kubernetes.io/projected/e957e72b-a032-4c37-930d-93f51e5c0f5a-kube-api-access-p9x52\") pod \"controller-manager-cc8d77586-nlmwd\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.589622 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.596300 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68fe3f4b-9983-4963-a0fb-8a153d178b72" path="/var/lib/kubelet/pods/68fe3f4b-9983-4963-a0fb-8a153d178b72/volumes" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.597840 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e237d5-13b6-40df-a9ce-6b73b134a064" path="/var/lib/kubelet/pods/72e237d5-13b6-40df-a9ce-6b73b134a064/volumes" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.600038 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.807773 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk"] Jan 26 17:49:19 crc kubenswrapper[4787]: I0126 17:49:19.985187 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cc8d77586-nlmwd"] Jan 26 17:49:19 crc kubenswrapper[4787]: W0126 17:49:19.992300 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode957e72b_a032_4c37_930d_93f51e5c0f5a.slice/crio-d3edacd08ad4f0bd891d704e748cee37791f8c5f28e80ed37531c3c937126ac0 WatchSource:0}: Error finding container d3edacd08ad4f0bd891d704e748cee37791f8c5f28e80ed37531c3c937126ac0: Status 404 returned error can't find the container with id d3edacd08ad4f0bd891d704e748cee37791f8c5f28e80ed37531c3c937126ac0 Jan 26 17:49:20 crc kubenswrapper[4787]: I0126 17:49:20.801544 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" 
event={"ID":"e957e72b-a032-4c37-930d-93f51e5c0f5a","Type":"ContainerStarted","Data":"53530ff87d28d044d6b50177ad1366d4fcdbaa69641f6f48c967470d1526aa01"} Jan 26 17:49:20 crc kubenswrapper[4787]: I0126 17:49:20.802284 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:20 crc kubenswrapper[4787]: I0126 17:49:20.802348 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" event={"ID":"e957e72b-a032-4c37-930d-93f51e5c0f5a","Type":"ContainerStarted","Data":"d3edacd08ad4f0bd891d704e748cee37791f8c5f28e80ed37531c3c937126ac0"} Jan 26 17:49:20 crc kubenswrapper[4787]: I0126 17:49:20.802812 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" event={"ID":"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3","Type":"ContainerStarted","Data":"3fb4da629f0043b6800262c29212628429db43deb8cdadbb8865f78250bb1479"} Jan 26 17:49:20 crc kubenswrapper[4787]: I0126 17:49:20.802844 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" event={"ID":"e8ebc024-bca3-4bc8-9b6f-6edceee55fe3","Type":"ContainerStarted","Data":"3bde651c299e46180c9947deac51e07013cf2ab1eb25416b5735a143385236de"} Jan 26 17:49:20 crc kubenswrapper[4787]: I0126 17:49:20.803264 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:20 crc kubenswrapper[4787]: I0126 17:49:20.807613 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" Jan 26 17:49:20 crc kubenswrapper[4787]: I0126 17:49:20.807681 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:49:20 crc kubenswrapper[4787]: I0126 17:49:20.819447 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" podStartSLOduration=3.819427133 podStartE2EDuration="3.819427133s" podCreationTimestamp="2026-01-26 17:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:49:20.816502058 +0000 UTC m=+329.523638191" watchObservedRunningTime="2026-01-26 17:49:20.819427133 +0000 UTC m=+329.526563276" Jan 26 17:49:20 crc kubenswrapper[4787]: I0126 17:49:20.852403 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8465d68986-rmdqk" podStartSLOduration=3.8523846390000003 podStartE2EDuration="3.852384639s" podCreationTimestamp="2026-01-26 17:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:49:20.850453951 +0000 UTC m=+329.557590084" watchObservedRunningTime="2026-01-26 17:49:20.852384639 +0000 UTC m=+329.559520772" Jan 26 17:49:46 crc kubenswrapper[4787]: I0126 17:49:46.807971 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:49:46 crc kubenswrapper[4787]: I0126 17:49:46.808526 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.274992 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5c48d"] Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.278431 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.285404 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5c48d"] Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.286689 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.428537 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k56m\" (UniqueName: \"kubernetes.io/projected/3bbf5ecf-4b0c-4a79-8024-9b50976c66ba-kube-api-access-5k56m\") pod \"redhat-marketplace-5c48d\" (UID: \"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba\") " pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.428654 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bbf5ecf-4b0c-4a79-8024-9b50976c66ba-utilities\") pod \"redhat-marketplace-5c48d\" (UID: \"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba\") " pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.428679 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bbf5ecf-4b0c-4a79-8024-9b50976c66ba-catalog-content\") pod \"redhat-marketplace-5c48d\" (UID: \"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba\") " pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 
26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.462710 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jlbjr"] Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.463979 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.466907 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.482031 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jlbjr"] Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.529285 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bbf5ecf-4b0c-4a79-8024-9b50976c66ba-utilities\") pod \"redhat-marketplace-5c48d\" (UID: \"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba\") " pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.529329 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bbf5ecf-4b0c-4a79-8024-9b50976c66ba-catalog-content\") pod \"redhat-marketplace-5c48d\" (UID: \"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba\") " pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.529363 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k56m\" (UniqueName: \"kubernetes.io/projected/3bbf5ecf-4b0c-4a79-8024-9b50976c66ba-kube-api-access-5k56m\") pod \"redhat-marketplace-5c48d\" (UID: \"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba\") " pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.529882 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bbf5ecf-4b0c-4a79-8024-9b50976c66ba-utilities\") pod \"redhat-marketplace-5c48d\" (UID: \"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba\") " pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.530089 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bbf5ecf-4b0c-4a79-8024-9b50976c66ba-catalog-content\") pod \"redhat-marketplace-5c48d\" (UID: \"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba\") " pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.547440 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k56m\" (UniqueName: \"kubernetes.io/projected/3bbf5ecf-4b0c-4a79-8024-9b50976c66ba-kube-api-access-5k56m\") pod \"redhat-marketplace-5c48d\" (UID: \"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba\") " pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.602884 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.631240 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfxx6\" (UniqueName: \"kubernetes.io/projected/17348c52-3049-45fc-8d0f-1be5551be571-kube-api-access-lfxx6\") pod \"redhat-operators-jlbjr\" (UID: \"17348c52-3049-45fc-8d0f-1be5551be571\") " pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.631292 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17348c52-3049-45fc-8d0f-1be5551be571-catalog-content\") pod \"redhat-operators-jlbjr\" (UID: \"17348c52-3049-45fc-8d0f-1be5551be571\") " pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.631349 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17348c52-3049-45fc-8d0f-1be5551be571-utilities\") pod \"redhat-operators-jlbjr\" (UID: \"17348c52-3049-45fc-8d0f-1be5551be571\") " pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.732325 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17348c52-3049-45fc-8d0f-1be5551be571-catalog-content\") pod \"redhat-operators-jlbjr\" (UID: \"17348c52-3049-45fc-8d0f-1be5551be571\") " pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.732796 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17348c52-3049-45fc-8d0f-1be5551be571-utilities\") pod \"redhat-operators-jlbjr\" (UID: 
\"17348c52-3049-45fc-8d0f-1be5551be571\") " pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.732978 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfxx6\" (UniqueName: \"kubernetes.io/projected/17348c52-3049-45fc-8d0f-1be5551be571-kube-api-access-lfxx6\") pod \"redhat-operators-jlbjr\" (UID: \"17348c52-3049-45fc-8d0f-1be5551be571\") " pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.733386 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17348c52-3049-45fc-8d0f-1be5551be571-catalog-content\") pod \"redhat-operators-jlbjr\" (UID: \"17348c52-3049-45fc-8d0f-1be5551be571\") " pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.734338 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17348c52-3049-45fc-8d0f-1be5551be571-utilities\") pod \"redhat-operators-jlbjr\" (UID: \"17348c52-3049-45fc-8d0f-1be5551be571\") " pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.767758 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfxx6\" (UniqueName: \"kubernetes.io/projected/17348c52-3049-45fc-8d0f-1be5551be571-kube-api-access-lfxx6\") pod \"redhat-operators-jlbjr\" (UID: \"17348c52-3049-45fc-8d0f-1be5551be571\") " pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.781371 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:03 crc kubenswrapper[4787]: I0126 17:50:03.979916 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5c48d"] Jan 26 17:50:04 crc kubenswrapper[4787]: I0126 17:50:04.024183 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5c48d" event={"ID":"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba","Type":"ContainerStarted","Data":"b629f848661cf4751d641a43e8d2977318064b3b01a124dc4787e5b340fca73f"} Jan 26 17:50:04 crc kubenswrapper[4787]: I0126 17:50:04.164159 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jlbjr"] Jan 26 17:50:04 crc kubenswrapper[4787]: W0126 17:50:04.193890 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17348c52_3049_45fc_8d0f_1be5551be571.slice/crio-eb9f1e7e307603dcd76086895d2b924dc1b1e754bbe73277db8f55de1ffdafa0 WatchSource:0}: Error finding container eb9f1e7e307603dcd76086895d2b924dc1b1e754bbe73277db8f55de1ffdafa0: Status 404 returned error can't find the container with id eb9f1e7e307603dcd76086895d2b924dc1b1e754bbe73277db8f55de1ffdafa0 Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.053916 4787 generic.go:334] "Generic (PLEG): container finished" podID="3bbf5ecf-4b0c-4a79-8024-9b50976c66ba" containerID="7393fc51b78dbcb37ea1694c030ea5e5157eab4d9628cb89cc97ec5094a45de4" exitCode=0 Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.054012 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5c48d" event={"ID":"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba","Type":"ContainerDied","Data":"7393fc51b78dbcb37ea1694c030ea5e5157eab4d9628cb89cc97ec5094a45de4"} Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.059653 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="17348c52-3049-45fc-8d0f-1be5551be571" containerID="15ed0e42fa4fdd9612443fdd9c0fc4602770453d6edf730c7b37a30e26ebb123" exitCode=0 Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.060115 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlbjr" event={"ID":"17348c52-3049-45fc-8d0f-1be5551be571","Type":"ContainerDied","Data":"15ed0e42fa4fdd9612443fdd9c0fc4602770453d6edf730c7b37a30e26ebb123"} Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.060157 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlbjr" event={"ID":"17348c52-3049-45fc-8d0f-1be5551be571","Type":"ContainerStarted","Data":"eb9f1e7e307603dcd76086895d2b924dc1b1e754bbe73277db8f55de1ffdafa0"} Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.659849 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5n8jx"] Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.660810 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.662238 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.670128 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5n8jx"] Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.760365 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fpdx\" (UniqueName: \"kubernetes.io/projected/4a7b6b2d-d416-469d-9993-48859b6421ea-kube-api-access-5fpdx\") pod \"certified-operators-5n8jx\" (UID: \"4a7b6b2d-d416-469d-9993-48859b6421ea\") " pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.760493 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a7b6b2d-d416-469d-9993-48859b6421ea-catalog-content\") pod \"certified-operators-5n8jx\" (UID: \"4a7b6b2d-d416-469d-9993-48859b6421ea\") " pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.760522 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a7b6b2d-d416-469d-9993-48859b6421ea-utilities\") pod \"certified-operators-5n8jx\" (UID: \"4a7b6b2d-d416-469d-9993-48859b6421ea\") " pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.859489 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f4pl9"] Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.860470 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.861404 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a7b6b2d-d416-469d-9993-48859b6421ea-catalog-content\") pod \"certified-operators-5n8jx\" (UID: \"4a7b6b2d-d416-469d-9993-48859b6421ea\") " pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.861466 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a7b6b2d-d416-469d-9993-48859b6421ea-utilities\") pod \"certified-operators-5n8jx\" (UID: \"4a7b6b2d-d416-469d-9993-48859b6421ea\") " pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.861504 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-utilities\") pod \"community-operators-f4pl9\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") " pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.861539 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggfww\" (UniqueName: \"kubernetes.io/projected/73da2f06-dbb9-430a-8067-d5396afebf85-kube-api-access-ggfww\") pod \"community-operators-f4pl9\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") " pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.861572 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-catalog-content\") pod 
\"community-operators-f4pl9\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") " pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.861697 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fpdx\" (UniqueName: \"kubernetes.io/projected/4a7b6b2d-d416-469d-9993-48859b6421ea-kube-api-access-5fpdx\") pod \"certified-operators-5n8jx\" (UID: \"4a7b6b2d-d416-469d-9993-48859b6421ea\") " pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.862176 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a7b6b2d-d416-469d-9993-48859b6421ea-catalog-content\") pod \"certified-operators-5n8jx\" (UID: \"4a7b6b2d-d416-469d-9993-48859b6421ea\") " pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.862204 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a7b6b2d-d416-469d-9993-48859b6421ea-utilities\") pod \"certified-operators-5n8jx\" (UID: \"4a7b6b2d-d416-469d-9993-48859b6421ea\") " pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.862538 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.868829 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4pl9"] Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.885314 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fpdx\" (UniqueName: \"kubernetes.io/projected/4a7b6b2d-d416-469d-9993-48859b6421ea-kube-api-access-5fpdx\") pod \"certified-operators-5n8jx\" (UID: 
\"4a7b6b2d-d416-469d-9993-48859b6421ea\") " pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.962588 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-utilities\") pod \"community-operators-f4pl9\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") " pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.962643 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggfww\" (UniqueName: \"kubernetes.io/projected/73da2f06-dbb9-430a-8067-d5396afebf85-kube-api-access-ggfww\") pod \"community-operators-f4pl9\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") " pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.962669 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-catalog-content\") pod \"community-operators-f4pl9\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") " pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.963139 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-catalog-content\") pod \"community-operators-f4pl9\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") " pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.963295 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-utilities\") pod \"community-operators-f4pl9\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") 
" pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.979533 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:05 crc kubenswrapper[4787]: I0126 17:50:05.985792 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggfww\" (UniqueName: \"kubernetes.io/projected/73da2f06-dbb9-430a-8067-d5396afebf85-kube-api-access-ggfww\") pod \"community-operators-f4pl9\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") " pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:06 crc kubenswrapper[4787]: I0126 17:50:06.066682 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlbjr" event={"ID":"17348c52-3049-45fc-8d0f-1be5551be571","Type":"ContainerStarted","Data":"fd7c3b8ee2fe38f49cc1349bf28d7c04b6eb19ec0e97ac78c2bfa205131c91c1"} Jan 26 17:50:06 crc kubenswrapper[4787]: I0126 17:50:06.200047 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:06 crc kubenswrapper[4787]: I0126 17:50:06.399922 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5n8jx"] Jan 26 17:50:06 crc kubenswrapper[4787]: W0126 17:50:06.407267 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a7b6b2d_d416_469d_9993_48859b6421ea.slice/crio-009638404d88d1d392ff712579f51d8ea80848b882952a513a4b2fc68d7ec6a7 WatchSource:0}: Error finding container 009638404d88d1d392ff712579f51d8ea80848b882952a513a4b2fc68d7ec6a7: Status 404 returned error can't find the container with id 009638404d88d1d392ff712579f51d8ea80848b882952a513a4b2fc68d7ec6a7 Jan 26 17:50:06 crc kubenswrapper[4787]: I0126 17:50:06.598979 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4pl9"] Jan 26 17:50:06 crc kubenswrapper[4787]: W0126 17:50:06.631555 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73da2f06_dbb9_430a_8067_d5396afebf85.slice/crio-d9c529be4d49430d6fee5655165e4ef9ed094693e04e50ebc1de409d86ec4d1d WatchSource:0}: Error finding container d9c529be4d49430d6fee5655165e4ef9ed094693e04e50ebc1de409d86ec4d1d: Status 404 returned error can't find the container with id d9c529be4d49430d6fee5655165e4ef9ed094693e04e50ebc1de409d86ec4d1d Jan 26 17:50:07 crc kubenswrapper[4787]: I0126 17:50:07.072223 4787 generic.go:334] "Generic (PLEG): container finished" podID="73da2f06-dbb9-430a-8067-d5396afebf85" containerID="863cfe7a2db0da1768295c8b073be461e70556ce22239349a5ed0b2586f0198b" exitCode=0 Jan 26 17:50:07 crc kubenswrapper[4787]: I0126 17:50:07.072350 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4pl9" 
event={"ID":"73da2f06-dbb9-430a-8067-d5396afebf85","Type":"ContainerDied","Data":"863cfe7a2db0da1768295c8b073be461e70556ce22239349a5ed0b2586f0198b"} Jan 26 17:50:07 crc kubenswrapper[4787]: I0126 17:50:07.072417 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4pl9" event={"ID":"73da2f06-dbb9-430a-8067-d5396afebf85","Type":"ContainerStarted","Data":"d9c529be4d49430d6fee5655165e4ef9ed094693e04e50ebc1de409d86ec4d1d"} Jan 26 17:50:07 crc kubenswrapper[4787]: I0126 17:50:07.074896 4787 generic.go:334] "Generic (PLEG): container finished" podID="4a7b6b2d-d416-469d-9993-48859b6421ea" containerID="a201e4228f42095954bc8033545778b0d706c65327e83e3869cb6595de9187c7" exitCode=0 Jan 26 17:50:07 crc kubenswrapper[4787]: I0126 17:50:07.074975 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5n8jx" event={"ID":"4a7b6b2d-d416-469d-9993-48859b6421ea","Type":"ContainerDied","Data":"a201e4228f42095954bc8033545778b0d706c65327e83e3869cb6595de9187c7"} Jan 26 17:50:07 crc kubenswrapper[4787]: I0126 17:50:07.075011 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5n8jx" event={"ID":"4a7b6b2d-d416-469d-9993-48859b6421ea","Type":"ContainerStarted","Data":"009638404d88d1d392ff712579f51d8ea80848b882952a513a4b2fc68d7ec6a7"} Jan 26 17:50:07 crc kubenswrapper[4787]: I0126 17:50:07.077556 4787 generic.go:334] "Generic (PLEG): container finished" podID="17348c52-3049-45fc-8d0f-1be5551be571" containerID="fd7c3b8ee2fe38f49cc1349bf28d7c04b6eb19ec0e97ac78c2bfa205131c91c1" exitCode=0 Jan 26 17:50:07 crc kubenswrapper[4787]: I0126 17:50:07.077649 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlbjr" event={"ID":"17348c52-3049-45fc-8d0f-1be5551be571","Type":"ContainerDied","Data":"fd7c3b8ee2fe38f49cc1349bf28d7c04b6eb19ec0e97ac78c2bfa205131c91c1"} Jan 26 17:50:07 crc kubenswrapper[4787]: I0126 
17:50:07.081970 4787 generic.go:334] "Generic (PLEG): container finished" podID="3bbf5ecf-4b0c-4a79-8024-9b50976c66ba" containerID="6671b9ab75f74dc6ee9aa4d33079d334be4845dbe16860ac340f7fa16ec2bd2c" exitCode=0 Jan 26 17:50:07 crc kubenswrapper[4787]: I0126 17:50:07.082010 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5c48d" event={"ID":"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba","Type":"ContainerDied","Data":"6671b9ab75f74dc6ee9aa4d33079d334be4845dbe16860ac340f7fa16ec2bd2c"} Jan 26 17:50:08 crc kubenswrapper[4787]: I0126 17:50:08.088600 4787 generic.go:334] "Generic (PLEG): container finished" podID="73da2f06-dbb9-430a-8067-d5396afebf85" containerID="77cee7b7d66916174fe8cce68938d7632a552bf1d4336376afe03d21d4865f6a" exitCode=0 Jan 26 17:50:08 crc kubenswrapper[4787]: I0126 17:50:08.088780 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4pl9" event={"ID":"73da2f06-dbb9-430a-8067-d5396afebf85","Type":"ContainerDied","Data":"77cee7b7d66916174fe8cce68938d7632a552bf1d4336376afe03d21d4865f6a"} Jan 26 17:50:08 crc kubenswrapper[4787]: I0126 17:50:08.090862 4787 generic.go:334] "Generic (PLEG): container finished" podID="4a7b6b2d-d416-469d-9993-48859b6421ea" containerID="c203360eb5cccdb352397f680f8f5662dd88d833d906ed5290b4a602f380d47c" exitCode=0 Jan 26 17:50:08 crc kubenswrapper[4787]: I0126 17:50:08.090926 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5n8jx" event={"ID":"4a7b6b2d-d416-469d-9993-48859b6421ea","Type":"ContainerDied","Data":"c203360eb5cccdb352397f680f8f5662dd88d833d906ed5290b4a602f380d47c"} Jan 26 17:50:08 crc kubenswrapper[4787]: I0126 17:50:08.095961 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlbjr" 
event={"ID":"17348c52-3049-45fc-8d0f-1be5551be571","Type":"ContainerStarted","Data":"ee28b4d7e0aa358cacaa89d2f9cd6f616866a2d10b936de994d08e7a8e81b3a9"} Jan 26 17:50:08 crc kubenswrapper[4787]: I0126 17:50:08.098009 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5c48d" event={"ID":"3bbf5ecf-4b0c-4a79-8024-9b50976c66ba","Type":"ContainerStarted","Data":"a472642543cd5e295ed503c11bc1d697e9aa8839dd96833f7f726e960e5af73f"} Jan 26 17:50:08 crc kubenswrapper[4787]: I0126 17:50:08.125688 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5c48d" podStartSLOduration=2.687793222 podStartE2EDuration="5.12566611s" podCreationTimestamp="2026-01-26 17:50:03 +0000 UTC" firstStartedPulling="2026-01-26 17:50:05.056239222 +0000 UTC m=+373.763375365" lastFinishedPulling="2026-01-26 17:50:07.49411211 +0000 UTC m=+376.201248253" observedRunningTime="2026-01-26 17:50:08.123209469 +0000 UTC m=+376.830345602" watchObservedRunningTime="2026-01-26 17:50:08.12566611 +0000 UTC m=+376.832802243" Jan 26 17:50:08 crc kubenswrapper[4787]: I0126 17:50:08.141815 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jlbjr" podStartSLOduration=2.750500024 podStartE2EDuration="5.14179404s" podCreationTimestamp="2026-01-26 17:50:03 +0000 UTC" firstStartedPulling="2026-01-26 17:50:05.064488121 +0000 UTC m=+373.771624264" lastFinishedPulling="2026-01-26 17:50:07.455782147 +0000 UTC m=+376.162918280" observedRunningTime="2026-01-26 17:50:08.14178935 +0000 UTC m=+376.848925503" watchObservedRunningTime="2026-01-26 17:50:08.14179404 +0000 UTC m=+376.848930163" Jan 26 17:50:09 crc kubenswrapper[4787]: I0126 17:50:09.109144 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4pl9" 
event={"ID":"73da2f06-dbb9-430a-8067-d5396afebf85","Type":"ContainerStarted","Data":"72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4"} Jan 26 17:50:09 crc kubenswrapper[4787]: I0126 17:50:09.111709 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5n8jx" event={"ID":"4a7b6b2d-d416-469d-9993-48859b6421ea","Type":"ContainerStarted","Data":"5495b0cd6646ee2eccf75f956ff601f53e5f550743cca01981d474a5586d3263"} Jan 26 17:50:09 crc kubenswrapper[4787]: I0126 17:50:09.127753 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f4pl9" podStartSLOduration=2.624616214 podStartE2EDuration="4.127737691s" podCreationTimestamp="2026-01-26 17:50:05 +0000 UTC" firstStartedPulling="2026-01-26 17:50:07.07401332 +0000 UTC m=+375.781149453" lastFinishedPulling="2026-01-26 17:50:08.577134797 +0000 UTC m=+377.284270930" observedRunningTime="2026-01-26 17:50:09.127601138 +0000 UTC m=+377.834737271" watchObservedRunningTime="2026-01-26 17:50:09.127737691 +0000 UTC m=+377.834873824" Jan 26 17:50:10 crc kubenswrapper[4787]: I0126 17:50:10.878321 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5n8jx" podStartSLOduration=4.177111675 podStartE2EDuration="5.878297463s" podCreationTimestamp="2026-01-26 17:50:05 +0000 UTC" firstStartedPulling="2026-01-26 17:50:07.076101893 +0000 UTC m=+375.783238036" lastFinishedPulling="2026-01-26 17:50:08.777287691 +0000 UTC m=+377.484423824" observedRunningTime="2026-01-26 17:50:09.149617278 +0000 UTC m=+377.856753421" watchObservedRunningTime="2026-01-26 17:50:10.878297463 +0000 UTC m=+379.585433586" Jan 26 17:50:10 crc kubenswrapper[4787]: I0126 17:50:10.880849 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cc8d77586-nlmwd"] Jan 26 17:50:10 crc kubenswrapper[4787]: I0126 17:50:10.881195 4787 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" podUID="e957e72b-a032-4c37-930d-93f51e5c0f5a" containerName="controller-manager" containerID="cri-o://53530ff87d28d044d6b50177ad1366d4fcdbaa69641f6f48c967470d1526aa01" gracePeriod=30 Jan 26 17:50:12 crc kubenswrapper[4787]: I0126 17:50:12.127203 4787 generic.go:334] "Generic (PLEG): container finished" podID="e957e72b-a032-4c37-930d-93f51e5c0f5a" containerID="53530ff87d28d044d6b50177ad1366d4fcdbaa69641f6f48c967470d1526aa01" exitCode=0 Jan 26 17:50:12 crc kubenswrapper[4787]: I0126 17:50:12.127303 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" event={"ID":"e957e72b-a032-4c37-930d-93f51e5c0f5a","Type":"ContainerDied","Data":"53530ff87d28d044d6b50177ad1366d4fcdbaa69641f6f48c967470d1526aa01"} Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.431657 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wqp92"] Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.432581 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.456609 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wqp92"] Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.558164 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jhhk\" (UniqueName: \"kubernetes.io/projected/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-kube-api-access-8jhhk\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.558203 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-registry-certificates\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.558238 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.558272 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.558291 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-bound-sa-token\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.558443 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.558502 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-trusted-ca\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.558592 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-registry-tls\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.578966 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.603244 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.603482 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.649764 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.659488 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-registry-tls\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.659546 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jhhk\" (UniqueName: \"kubernetes.io/projected/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-kube-api-access-8jhhk\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.659568 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-registry-certificates\") pod \"image-registry-66df7c8f76-wqp92\" (UID: 
\"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.659596 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.659624 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-bound-sa-token\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.659662 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.659680 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-trusted-ca\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.660980 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-trusted-ca\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.662787 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.668602 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-registry-certificates\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.668688 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-registry-tls\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.671021 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.684578 4787 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-8jhhk\" (UniqueName: \"kubernetes.io/projected/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-kube-api-access-8jhhk\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.691196 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c55943d4-f0e5-4777-a6f1-e9b73b6c80b3-bound-sa-token\") pod \"image-registry-66df7c8f76-wqp92\" (UID: \"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.747653 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.782172 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.782414 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.812571 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.830563 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.845054 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-794c98fcf4-wb652"] Jan 26 17:50:13 crc kubenswrapper[4787]: E0126 17:50:13.845871 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e957e72b-a032-4c37-930d-93f51e5c0f5a" containerName="controller-manager" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.845891 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e957e72b-a032-4c37-930d-93f51e5c0f5a" containerName="controller-manager" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.846025 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e957e72b-a032-4c37-930d-93f51e5c0f5a" containerName="controller-manager" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.846477 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.857315 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-794c98fcf4-wb652"] Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.963443 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9x52\" (UniqueName: \"kubernetes.io/projected/e957e72b-a032-4c37-930d-93f51e5c0f5a-kube-api-access-p9x52\") pod \"e957e72b-a032-4c37-930d-93f51e5c0f5a\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.963513 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-client-ca\") pod \"e957e72b-a032-4c37-930d-93f51e5c0f5a\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.963565 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-proxy-ca-bundles\") pod \"e957e72b-a032-4c37-930d-93f51e5c0f5a\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.963587 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-config\") pod \"e957e72b-a032-4c37-930d-93f51e5c0f5a\" (UID: \"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.963652 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e957e72b-a032-4c37-930d-93f51e5c0f5a-serving-cert\") pod \"e957e72b-a032-4c37-930d-93f51e5c0f5a\" (UID: 
\"e957e72b-a032-4c37-930d-93f51e5c0f5a\") " Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.963827 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59d4f212-a20d-436d-84a9-b3dfd500156e-proxy-ca-bundles\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.963880 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59d4f212-a20d-436d-84a9-b3dfd500156e-client-ca\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.963906 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7kxx\" (UniqueName: \"kubernetes.io/projected/59d4f212-a20d-436d-84a9-b3dfd500156e-kube-api-access-t7kxx\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.963925 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59d4f212-a20d-436d-84a9-b3dfd500156e-serving-cert\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.963940 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/59d4f212-a20d-436d-84a9-b3dfd500156e-config\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.966254 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-config" (OuterVolumeSpecName: "config") pod "e957e72b-a032-4c37-930d-93f51e5c0f5a" (UID: "e957e72b-a032-4c37-930d-93f51e5c0f5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.966496 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e957e72b-a032-4c37-930d-93f51e5c0f5a" (UID: "e957e72b-a032-4c37-930d-93f51e5c0f5a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.967640 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-client-ca" (OuterVolumeSpecName: "client-ca") pod "e957e72b-a032-4c37-930d-93f51e5c0f5a" (UID: "e957e72b-a032-4c37-930d-93f51e5c0f5a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.988268 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e957e72b-a032-4c37-930d-93f51e5c0f5a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e957e72b-a032-4c37-930d-93f51e5c0f5a" (UID: "e957e72b-a032-4c37-930d-93f51e5c0f5a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:50:13 crc kubenswrapper[4787]: I0126 17:50:13.988368 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e957e72b-a032-4c37-930d-93f51e5c0f5a-kube-api-access-p9x52" (OuterVolumeSpecName: "kube-api-access-p9x52") pod "e957e72b-a032-4c37-930d-93f51e5c0f5a" (UID: "e957e72b-a032-4c37-930d-93f51e5c0f5a"). InnerVolumeSpecName "kube-api-access-p9x52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.018021 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wqp92"] Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.065064 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59d4f212-a20d-436d-84a9-b3dfd500156e-client-ca\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.065129 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7kxx\" (UniqueName: \"kubernetes.io/projected/59d4f212-a20d-436d-84a9-b3dfd500156e-kube-api-access-t7kxx\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.065159 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59d4f212-a20d-436d-84a9-b3dfd500156e-serving-cert\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:14 crc 
kubenswrapper[4787]: I0126 17:50:14.065182 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59d4f212-a20d-436d-84a9-b3dfd500156e-config\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.065228 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59d4f212-a20d-436d-84a9-b3dfd500156e-proxy-ca-bundles\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.065272 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.065282 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e957e72b-a032-4c37-930d-93f51e5c0f5a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.065291 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9x52\" (UniqueName: \"kubernetes.io/projected/e957e72b-a032-4c37-930d-93f51e5c0f5a-kube-api-access-p9x52\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.065301 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.065310 4787 reconciler_common.go:293] "Volume detached for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e957e72b-a032-4c37-930d-93f51e5c0f5a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.066191 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59d4f212-a20d-436d-84a9-b3dfd500156e-client-ca\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.066248 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59d4f212-a20d-436d-84a9-b3dfd500156e-proxy-ca-bundles\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.066930 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59d4f212-a20d-436d-84a9-b3dfd500156e-config\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.071449 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59d4f212-a20d-436d-84a9-b3dfd500156e-serving-cert\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.081485 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7kxx\" (UniqueName: 
\"kubernetes.io/projected/59d4f212-a20d-436d-84a9-b3dfd500156e-kube-api-access-t7kxx\") pod \"controller-manager-794c98fcf4-wb652\" (UID: \"59d4f212-a20d-436d-84a9-b3dfd500156e\") " pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.137124 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" event={"ID":"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3","Type":"ContainerStarted","Data":"2348b0766f4cef7772e5d6335b738b27806772b7ae20bb440c20424c9c440b6e"} Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.138993 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" event={"ID":"e957e72b-a032-4c37-930d-93f51e5c0f5a","Type":"ContainerDied","Data":"d3edacd08ad4f0bd891d704e748cee37791f8c5f28e80ed37531c3c937126ac0"} Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.139064 4787 scope.go:117] "RemoveContainer" containerID="53530ff87d28d044d6b50177ad1366d4fcdbaa69641f6f48c967470d1526aa01" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.139108 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cc8d77586-nlmwd" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.174290 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.177975 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5c48d" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.179538 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cc8d77586-nlmwd"] Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.181311 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jlbjr" Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.185247 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cc8d77586-nlmwd"] Jan 26 17:50:14 crc kubenswrapper[4787]: I0126 17:50:14.572639 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-794c98fcf4-wb652"] Jan 26 17:50:14 crc kubenswrapper[4787]: W0126 17:50:14.582661 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59d4f212_a20d_436d_84a9_b3dfd500156e.slice/crio-119641d301335ec389bf6d57599a92253fc0d5c321339cfd0cb3768d92a6d90e WatchSource:0}: Error finding container 119641d301335ec389bf6d57599a92253fc0d5c321339cfd0cb3768d92a6d90e: Status 404 returned error can't find the container with id 119641d301335ec389bf6d57599a92253fc0d5c321339cfd0cb3768d92a6d90e Jan 26 17:50:15 crc kubenswrapper[4787]: I0126 17:50:15.148109 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" event={"ID":"59d4f212-a20d-436d-84a9-b3dfd500156e","Type":"ContainerStarted","Data":"119641d301335ec389bf6d57599a92253fc0d5c321339cfd0cb3768d92a6d90e"} Jan 26 17:50:15 crc kubenswrapper[4787]: I0126 17:50:15.595376 4787 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e957e72b-a032-4c37-930d-93f51e5c0f5a" path="/var/lib/kubelet/pods/e957e72b-a032-4c37-930d-93f51e5c0f5a/volumes" Jan 26 17:50:15 crc kubenswrapper[4787]: I0126 17:50:15.981173 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:15 crc kubenswrapper[4787]: I0126 17:50:15.981224 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.022224 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.155387 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" event={"ID":"59d4f212-a20d-436d-84a9-b3dfd500156e","Type":"ContainerStarted","Data":"7fa928d723e684b16a3c4583105ef65caa17a0d7a7ec3d36772ce3308e4349d6"} Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.155755 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.157990 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" event={"ID":"c55943d4-f0e5-4777-a6f1-e9b73b6c80b3","Type":"ContainerStarted","Data":"5a6c14e663a8f80fae8925aec2dfd6627d791ad5b9bda8c7ac56060c5c0baefa"} Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.161043 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.176918 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-794c98fcf4-wb652" podStartSLOduration=6.176900689 podStartE2EDuration="6.176900689s" podCreationTimestamp="2026-01-26 17:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:50:16.174261962 +0000 UTC m=+384.881398095" watchObservedRunningTime="2026-01-26 17:50:16.176900689 +0000 UTC m=+384.884036832" Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.205088 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5n8jx" Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.205148 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.206255 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.232742 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" podStartSLOduration=3.232725696 podStartE2EDuration="3.232725696s" podCreationTimestamp="2026-01-26 17:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:50:16.21945191 +0000 UTC m=+384.926588043" watchObservedRunningTime="2026-01-26 17:50:16.232725696 +0000 UTC m=+384.939861819" Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.246260 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.808335 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:50:16 crc kubenswrapper[4787]: I0126 17:50:16.808398 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:50:17 crc kubenswrapper[4787]: I0126 17:50:17.162448 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:17 crc kubenswrapper[4787]: I0126 17:50:17.198753 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f4pl9" Jan 26 17:50:33 crc kubenswrapper[4787]: I0126 17:50:33.753381 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-wqp92" Jan 26 17:50:33 crc kubenswrapper[4787]: I0126 17:50:33.799513 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b7v9x"] Jan 26 17:50:46 crc kubenswrapper[4787]: I0126 17:50:46.807198 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:50:46 crc kubenswrapper[4787]: I0126 17:50:46.807596 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:50:46 crc kubenswrapper[4787]: I0126 17:50:46.807630 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:50:46 crc kubenswrapper[4787]: I0126 17:50:46.809466 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba7264e4540695a88937d861739646936874b439b552b6bba19ee9dc46246481"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:50:46 crc kubenswrapper[4787]: I0126 17:50:46.809639 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://ba7264e4540695a88937d861739646936874b439b552b6bba19ee9dc46246481" gracePeriod=600 Jan 26 17:50:49 crc kubenswrapper[4787]: I0126 17:50:49.361093 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="ba7264e4540695a88937d861739646936874b439b552b6bba19ee9dc46246481" exitCode=0 Jan 26 17:50:49 crc kubenswrapper[4787]: I0126 17:50:49.361187 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"ba7264e4540695a88937d861739646936874b439b552b6bba19ee9dc46246481"} Jan 26 17:50:49 crc kubenswrapper[4787]: I0126 17:50:49.361413 4787 scope.go:117] "RemoveContainer" containerID="21ce515abd9ca0bc726c22d8b9e9cbcda6261a9cf2ef6fe5215accaa11e3c87a" Jan 26 17:50:50 crc kubenswrapper[4787]: I0126 17:50:50.367295 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"f650ee61520498599c3b5c68c12d1d32bfaabde165ca884cb5a77f30320f32f1"} Jan 26 17:50:58 crc kubenswrapper[4787]: I0126 17:50:58.850587 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" podUID="68eeb1f1-7fc1-49a4-a56e-40f06deac48a" containerName="registry" containerID="cri-o://37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d" gracePeriod=30 Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.205184 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.291827 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.291919 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-ca-trust-extracted\") pod \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.291992 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-certificates\") pod \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.292013 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-bound-sa-token\") pod \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.292070 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-trusted-ca\") pod \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.292092 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2dvz\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-kube-api-access-b2dvz\") pod \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.292144 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-installation-pull-secrets\") pod \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.292165 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-tls\") pod \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\" (UID: \"68eeb1f1-7fc1-49a4-a56e-40f06deac48a\") " Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.293115 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod 
"68eeb1f1-7fc1-49a4-a56e-40f06deac48a" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.293398 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "68eeb1f1-7fc1-49a4-a56e-40f06deac48a" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.298107 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "68eeb1f1-7fc1-49a4-a56e-40f06deac48a" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.298201 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "68eeb1f1-7fc1-49a4-a56e-40f06deac48a" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.299062 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-kube-api-access-b2dvz" (OuterVolumeSpecName: "kube-api-access-b2dvz") pod "68eeb1f1-7fc1-49a4-a56e-40f06deac48a" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a"). InnerVolumeSpecName "kube-api-access-b2dvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.299382 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "68eeb1f1-7fc1-49a4-a56e-40f06deac48a" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.300546 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "68eeb1f1-7fc1-49a4-a56e-40f06deac48a" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.311223 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "68eeb1f1-7fc1-49a4-a56e-40f06deac48a" (UID: "68eeb1f1-7fc1-49a4-a56e-40f06deac48a"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.393149 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2dvz\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-kube-api-access-b2dvz\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.393174 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.393184 4787 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.393192 4787 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.393200 4787 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.393208 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.393215 4787 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68eeb1f1-7fc1-49a4-a56e-40f06deac48a-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 17:50:59 crc 
kubenswrapper[4787]: I0126 17:50:59.418202 4787 generic.go:334] "Generic (PLEG): container finished" podID="68eeb1f1-7fc1-49a4-a56e-40f06deac48a" containerID="37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d" exitCode=0 Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.418273 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.418260 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" event={"ID":"68eeb1f1-7fc1-49a4-a56e-40f06deac48a","Type":"ContainerDied","Data":"37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d"} Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.418414 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-b7v9x" event={"ID":"68eeb1f1-7fc1-49a4-a56e-40f06deac48a","Type":"ContainerDied","Data":"f61a70ce2598092ad10713e21f76b8bcdccc9457e74ef307753f2600f52c1b44"} Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.418433 4787 scope.go:117] "RemoveContainer" containerID="37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.436185 4787 scope.go:117] "RemoveContainer" containerID="37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d" Jan 26 17:50:59 crc kubenswrapper[4787]: E0126 17:50:59.437070 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d\": container with ID starting with 37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d not found: ID does not exist" containerID="37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.437105 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d"} err="failed to get container status \"37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d\": rpc error: code = NotFound desc = could not find container \"37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d\": container with ID starting with 37c56cb9c4a8fb9811622fc2d00db2b36915a9ef71229a26d7f640f1244cfa2d not found: ID does not exist" Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.460970 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b7v9x"] Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.466233 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b7v9x"] Jan 26 17:50:59 crc kubenswrapper[4787]: I0126 17:50:59.599101 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68eeb1f1-7fc1-49a4-a56e-40f06deac48a" path="/var/lib/kubelet/pods/68eeb1f1-7fc1-49a4-a56e-40f06deac48a/volumes" Jan 26 17:52:51 crc kubenswrapper[4787]: I0126 17:52:51.843103 4787 scope.go:117] "RemoveContainer" containerID="5951f3324050f06d493a28dff046c1e7c2efc725e88772fcb2982d4e99c9b4df" Jan 26 17:52:51 crc kubenswrapper[4787]: I0126 17:52:51.877637 4787 scope.go:117] "RemoveContainer" containerID="4d0c15b5790606565321b7415acecc23c774182cd30e504653186f6bf9fcab94" Jan 26 17:52:51 crc kubenswrapper[4787]: I0126 17:52:51.907447 4787 scope.go:117] "RemoveContainer" containerID="69d16a82358a678816fc279a3b99ecf811a48422af0d2bf5d291d3db8c983fc7" Jan 26 17:53:16 crc kubenswrapper[4787]: I0126 17:53:16.808147 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 26 17:53:16 crc kubenswrapper[4787]: I0126 17:53:16.808562 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:53:46 crc kubenswrapper[4787]: I0126 17:53:46.807994 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:53:46 crc kubenswrapper[4787]: I0126 17:53:46.808430 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:53:51 crc kubenswrapper[4787]: I0126 17:53:51.937440 4787 scope.go:117] "RemoveContainer" containerID="1c6f765e41e377135181b0fcef7374b47ad7d484455c81ae124a3644aee59a36" Jan 26 17:53:51 crc kubenswrapper[4787]: I0126 17:53:51.957351 4787 scope.go:117] "RemoveContainer" containerID="a71f82b48d2281275b4460e75cdf1c06b39271ce2222073e74151f540c45f265" Jan 26 17:53:51 crc kubenswrapper[4787]: I0126 17:53:51.972139 4787 scope.go:117] "RemoveContainer" containerID="fa5dfd7f1dd4889fc03b78c45ae695901d21a869e1187cc8f3da911b69c0ec3b" Jan 26 17:53:51 crc kubenswrapper[4787]: I0126 17:53:51.993355 4787 scope.go:117] "RemoveContainer" containerID="dc0813857e7f1eb6087ffa648d7651526d9143930fdbd312b2c3b5a782358ce9" Jan 26 17:53:52 crc kubenswrapper[4787]: I0126 17:53:52.010121 4787 scope.go:117] "RemoveContainer" 
containerID="a44eed48ccc820b3077cf22066b0d9100b86bf8b27e2793e1823220e25feba6e" Jan 26 17:53:52 crc kubenswrapper[4787]: I0126 17:53:52.027799 4787 scope.go:117] "RemoveContainer" containerID="22ae17ead73697363ecf39bb80e9a43639802f5d774d652bb1baa63a8bbd11e4" Jan 26 17:54:16 crc kubenswrapper[4787]: I0126 17:54:16.808259 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:54:16 crc kubenswrapper[4787]: I0126 17:54:16.808777 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:54:16 crc kubenswrapper[4787]: I0126 17:54:16.808824 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:54:16 crc kubenswrapper[4787]: I0126 17:54:16.809391 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f650ee61520498599c3b5c68c12d1d32bfaabde165ca884cb5a77f30320f32f1"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:54:16 crc kubenswrapper[4787]: I0126 17:54:16.809455 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" 
containerID="cri-o://f650ee61520498599c3b5c68c12d1d32bfaabde165ca884cb5a77f30320f32f1" gracePeriod=600 Jan 26 17:54:19 crc kubenswrapper[4787]: I0126 17:54:19.660767 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="f650ee61520498599c3b5c68c12d1d32bfaabde165ca884cb5a77f30320f32f1" exitCode=0 Jan 26 17:54:19 crc kubenswrapper[4787]: I0126 17:54:19.661369 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"f650ee61520498599c3b5c68c12d1d32bfaabde165ca884cb5a77f30320f32f1"} Jan 26 17:54:19 crc kubenswrapper[4787]: I0126 17:54:19.661407 4787 scope.go:117] "RemoveContainer" containerID="ba7264e4540695a88937d861739646936874b439b552b6bba19ee9dc46246481" Jan 26 17:54:20 crc kubenswrapper[4787]: I0126 17:54:20.669999 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"db3aa5f8ecb00426491aefbe94533ea044737555893f85a79b932f0e6fb23390"} Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.356569 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-c9g8l"] Jan 26 17:55:30 crc kubenswrapper[4787]: E0126 17:55:30.357663 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68eeb1f1-7fc1-49a4-a56e-40f06deac48a" containerName="registry" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.357688 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="68eeb1f1-7fc1-49a4-a56e-40f06deac48a" containerName="registry" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.357877 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="68eeb1f1-7fc1-49a4-a56e-40f06deac48a" containerName="registry" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 
17:55:30.358424 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.360600 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.360754 4787 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-n44nr" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.362486 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.362690 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.368677 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-c9g8l"] Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.515148 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-crc-storage\") pod \"crc-storage-crc-c9g8l\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") " pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.515365 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwvfl\" (UniqueName: \"kubernetes.io/projected/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-kube-api-access-lwvfl\") pod \"crc-storage-crc-c9g8l\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") " pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.515444 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: 
\"kubernetes.io/host-path/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-node-mnt\") pod \"crc-storage-crc-c9g8l\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") " pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.616893 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwvfl\" (UniqueName: \"kubernetes.io/projected/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-kube-api-access-lwvfl\") pod \"crc-storage-crc-c9g8l\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") " pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.617024 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-node-mnt\") pod \"crc-storage-crc-c9g8l\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") " pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.617077 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-crc-storage\") pod \"crc-storage-crc-c9g8l\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") " pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.617344 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-node-mnt\") pod \"crc-storage-crc-c9g8l\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") " pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.617839 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-crc-storage\") pod \"crc-storage-crc-c9g8l\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") 
" pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.635527 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwvfl\" (UniqueName: \"kubernetes.io/projected/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-kube-api-access-lwvfl\") pod \"crc-storage-crc-c9g8l\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") " pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.714208 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.894165 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-c9g8l"] Jan 26 17:55:30 crc kubenswrapper[4787]: I0126 17:55:30.902101 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:55:31 crc kubenswrapper[4787]: I0126 17:55:31.065112 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-c9g8l" event={"ID":"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9","Type":"ContainerStarted","Data":"5303b07cf44f19530a75d712e9f974a47a3b3bd1f445a2cfa64d998df1a1c859"} Jan 26 17:55:33 crc kubenswrapper[4787]: I0126 17:55:33.078570 4787 generic.go:334] "Generic (PLEG): container finished" podID="7eff1bd9-25f9-4008-bae4-9854e7bcb8d9" containerID="bfceaf62ddadb3718c46dd3c39b2e43c5f054ffec7d750806adeef0fdacd8701" exitCode=0 Jan 26 17:55:33 crc kubenswrapper[4787]: I0126 17:55:33.078635 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-c9g8l" event={"ID":"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9","Type":"ContainerDied","Data":"bfceaf62ddadb3718c46dd3c39b2e43c5f054ffec7d750806adeef0fdacd8701"} Jan 26 17:55:34 crc kubenswrapper[4787]: I0126 17:55:34.358548 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:34 crc kubenswrapper[4787]: I0126 17:55:34.464629 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-crc-storage\") pod \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") " Jan 26 17:55:34 crc kubenswrapper[4787]: I0126 17:55:34.464732 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwvfl\" (UniqueName: \"kubernetes.io/projected/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-kube-api-access-lwvfl\") pod \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") " Jan 26 17:55:34 crc kubenswrapper[4787]: I0126 17:55:34.464774 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-node-mnt\") pod \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\" (UID: \"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9\") " Jan 26 17:55:34 crc kubenswrapper[4787]: I0126 17:55:34.465057 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "7eff1bd9-25f9-4008-bae4-9854e7bcb8d9" (UID: "7eff1bd9-25f9-4008-bae4-9854e7bcb8d9"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:34 crc kubenswrapper[4787]: I0126 17:55:34.465340 4787 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:34 crc kubenswrapper[4787]: I0126 17:55:34.472509 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-kube-api-access-lwvfl" (OuterVolumeSpecName: "kube-api-access-lwvfl") pod "7eff1bd9-25f9-4008-bae4-9854e7bcb8d9" (UID: "7eff1bd9-25f9-4008-bae4-9854e7bcb8d9"). InnerVolumeSpecName "kube-api-access-lwvfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:55:34 crc kubenswrapper[4787]: I0126 17:55:34.482294 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "7eff1bd9-25f9-4008-bae4-9854e7bcb8d9" (UID: "7eff1bd9-25f9-4008-bae4-9854e7bcb8d9"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:55:34 crc kubenswrapper[4787]: I0126 17:55:34.567537 4787 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:34 crc kubenswrapper[4787]: I0126 17:55:34.567634 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwvfl\" (UniqueName: \"kubernetes.io/projected/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9-kube-api-access-lwvfl\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:35 crc kubenswrapper[4787]: I0126 17:55:35.090352 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-c9g8l" event={"ID":"7eff1bd9-25f9-4008-bae4-9854e7bcb8d9","Type":"ContainerDied","Data":"5303b07cf44f19530a75d712e9f974a47a3b3bd1f445a2cfa64d998df1a1c859"} Jan 26 17:55:35 crc kubenswrapper[4787]: I0126 17:55:35.090393 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-c9g8l" Jan 26 17:55:35 crc kubenswrapper[4787]: I0126 17:55:35.090409 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5303b07cf44f19530a75d712e9f974a47a3b3bd1f445a2cfa64d998df1a1c859" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.267365 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2"] Jan 26 17:55:41 crc kubenswrapper[4787]: E0126 17:55:41.267765 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eff1bd9-25f9-4008-bae4-9854e7bcb8d9" containerName="storage" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.267777 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eff1bd9-25f9-4008-bae4-9854e7bcb8d9" containerName="storage" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.267868 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eff1bd9-25f9-4008-bae4-9854e7bcb8d9" containerName="storage" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.277428 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.281302 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.289177 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2"] Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.455551 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.455610 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.455679 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh78\" (UniqueName: \"kubernetes.io/projected/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-kube-api-access-vmh78\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:41 crc kubenswrapper[4787]: 
I0126 17:55:41.556790 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh78\" (UniqueName: \"kubernetes.io/projected/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-kube-api-access-vmh78\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.556883 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.556916 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.557774 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.557877 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.590496 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh78\" (UniqueName: \"kubernetes.io/projected/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-kube-api-access-vmh78\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.602723 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:41 crc kubenswrapper[4787]: I0126 17:55:41.824307 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2"] Jan 26 17:55:42 crc kubenswrapper[4787]: I0126 17:55:42.132324 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" event={"ID":"01dd99fb-59f5-4a7f-aa8e-73907ccf3077","Type":"ContainerStarted","Data":"09fde2748a50c8d3807d3502635c84b19ec6d18dfa03c98ac30794dc20d80a80"} Jan 26 17:55:42 crc kubenswrapper[4787]: I0126 17:55:42.132666 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" event={"ID":"01dd99fb-59f5-4a7f-aa8e-73907ccf3077","Type":"ContainerStarted","Data":"c3d942f9760d82dc92d22a4e2dea14a0ff64556dc48cdd178992c1ffbcc784f5"} Jan 26 17:55:44 crc kubenswrapper[4787]: I0126 17:55:44.396575 4787 
generic.go:334] "Generic (PLEG): container finished" podID="01dd99fb-59f5-4a7f-aa8e-73907ccf3077" containerID="09fde2748a50c8d3807d3502635c84b19ec6d18dfa03c98ac30794dc20d80a80" exitCode=0 Jan 26 17:55:44 crc kubenswrapper[4787]: I0126 17:55:44.396664 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" event={"ID":"01dd99fb-59f5-4a7f-aa8e-73907ccf3077","Type":"ContainerDied","Data":"09fde2748a50c8d3807d3502635c84b19ec6d18dfa03c98ac30794dc20d80a80"} Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.175539 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cpbtq"] Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.175997 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovn-controller" containerID="cri-o://0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac" gracePeriod=30 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.176072 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="nbdb" containerID="cri-o://aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be" gracePeriod=30 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.176130 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="northd" containerID="cri-o://f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f" gracePeriod=30 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.176155 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" 
podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d" gracePeriod=30 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.176400 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="kube-rbac-proxy-node" containerID="cri-o://549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98" gracePeriod=30 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.176449 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovn-acl-logging" containerID="cri-o://ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37" gracePeriod=30 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.176581 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="sbdb" containerID="cri-o://157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485" gracePeriod=30 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.237230 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" containerID="cri-o://f68cfb6d7c911e9a9cee20fb12a2a1d01f7639c5f08d063e61c29dc4f8da7bc2" gracePeriod=30 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.410777 4787 generic.go:334] "Generic (PLEG): container finished" podID="01dd99fb-59f5-4a7f-aa8e-73907ccf3077" containerID="7f65159382bbf1798abfb5819bed277bd56529f58b260b6214ae030058c6f4b9" exitCode=0 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.410832 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" event={"ID":"01dd99fb-59f5-4a7f-aa8e-73907ccf3077","Type":"ContainerDied","Data":"7f65159382bbf1798abfb5819bed277bd56529f58b260b6214ae030058c6f4b9"} Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.415920 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-65mpd_d2e50ad1-82f9-48f0-a103-6d584a3fa02e/kube-multus/2.log" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.417144 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-65mpd_d2e50ad1-82f9-48f0-a103-6d584a3fa02e/kube-multus/1.log" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.417179 4787 generic.go:334] "Generic (PLEG): container finished" podID="d2e50ad1-82f9-48f0-a103-6d584a3fa02e" containerID="e090b2e5f3f08a1ac007a434bb7a458de7e6770d683fa1c18270c831b8aa5db5" exitCode=2 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.417228 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65mpd" event={"ID":"d2e50ad1-82f9-48f0-a103-6d584a3fa02e","Type":"ContainerDied","Data":"e090b2e5f3f08a1ac007a434bb7a458de7e6770d683fa1c18270c831b8aa5db5"} Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.417256 4787 scope.go:117] "RemoveContainer" containerID="e2e517dbceca3d728dc183ac2d4b93dbbcd432cab8c7745a3bff77a2df0a3eff" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.417554 4787 scope.go:117] "RemoveContainer" containerID="e090b2e5f3f08a1ac007a434bb7a458de7e6770d683fa1c18270c831b8aa5db5" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.417692 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-65mpd_openshift-multus(d2e50ad1-82f9-48f0-a103-6d584a3fa02e)\"" pod="openshift-multus/multus-65mpd" 
podUID="d2e50ad1-82f9-48f0-a103-6d584a3fa02e" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.420856 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovnkube-controller/3.log" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.423329 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovn-acl-logging/0.log" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.423686 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovn-controller/0.log" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424083 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="f68cfb6d7c911e9a9cee20fb12a2a1d01f7639c5f08d063e61c29dc4f8da7bc2" exitCode=0 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424105 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485" exitCode=0 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424114 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be" exitCode=0 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424122 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f" exitCode=0 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424131 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" 
containerID="ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d" exitCode=0 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424138 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98" exitCode=0 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424144 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37" exitCode=143 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424151 4787 generic.go:334] "Generic (PLEG): container finished" podID="474c6821-f8c5-400e-a584-0d63c13e0655" containerID="0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac" exitCode=143 Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424168 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"f68cfb6d7c911e9a9cee20fb12a2a1d01f7639c5f08d063e61c29dc4f8da7bc2"} Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424189 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485"} Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424198 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be"} Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424207 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" 
event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f"} Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424214 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d"} Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424222 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98"} Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424230 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37"} Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.424242 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac"} Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.468939 4787 scope.go:117] "RemoveContainer" containerID="54d6f87bfc19f8f5a1531c6ba15dfc4d65b2c1c0db2ef0478b8b3e753e628fbe" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.499331 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovn-acl-logging/0.log" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.500062 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovn-controller/0.log" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.500445 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554238 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-56ggz"] Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554501 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554524 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554534 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554542 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554552 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="northd" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554560 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="northd" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554573 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="nbdb" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554579 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="nbdb" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554589 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554597 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554607 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="kube-rbac-proxy-node" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554612 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="kube-rbac-proxy-node" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554620 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="kubecfg-setup" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554625 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="kubecfg-setup" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554632 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="sbdb" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554640 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="sbdb" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554647 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovn-acl-logging" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554653 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovn-acl-logging" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554662 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovn-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554667 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovn-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554675 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554680 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554765 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="sbdb" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554775 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554782 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554791 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554800 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554809 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="kube-rbac-proxy-node" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554816 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovn-acl-logging" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554823 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="northd" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554829 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554836 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="nbdb" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554845 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovn-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554921 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554929 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: E0126 17:55:46.554940 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.554968 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 
17:55:46.555062 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" containerName="ovnkube-controller" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.562362 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.624300 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-bin\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.624569 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-ovn\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.624700 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-node-log\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.624828 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-log-socket\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.624935 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-systemd-units\") pod 
\"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625043 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-etc-openvswitch\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625125 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-openvswitch\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625216 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-slash\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625327 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-var-lib-openvswitch\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625419 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-config\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625494 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-ovn-kubernetes\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625572 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-kubelet\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625698 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-env-overrides\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625811 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-var-lib-cni-networks-ovn-kubernetes\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625922 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-systemd\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.624459 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "474c6821-f8c5-400e-a584-0d63c13e0655" 
(UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.624602 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.624736 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-node-log" (OuterVolumeSpecName: "node-log") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.624893 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-log-socket" (OuterVolumeSpecName: "log-socket") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.624981 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625090 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625247 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625278 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-slash" (OuterVolumeSpecName: "host-slash") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625364 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625562 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625596 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626079 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-script-lib\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625887 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.625888 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626256 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/474c6821-f8c5-400e-a584-0d63c13e0655-ovn-node-metrics-cert\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626293 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jdfz\" (UniqueName: \"kubernetes.io/projected/474c6821-f8c5-400e-a584-0d63c13e0655-kube-api-access-2jdfz\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626315 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-netd\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626339 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-netns\") pod \"474c6821-f8c5-400e-a584-0d63c13e0655\" (UID: \"474c6821-f8c5-400e-a584-0d63c13e0655\") " Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626192 
4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626420 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626495 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626811 4787 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626843 4787 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626860 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626872 4787 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626882 4787 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.626963 4787 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.627053 4787 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" 
Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.627092 4787 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.627121 4787 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.627151 4787 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.627182 4787 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.627207 4787 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.627235 4787 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.627258 4787 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.627283 4787 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.627305 4787 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.627598 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.633719 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/474c6821-f8c5-400e-a584-0d63c13e0655-kube-api-access-2jdfz" (OuterVolumeSpecName: "kube-api-access-2jdfz") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "kube-api-access-2jdfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.634872 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/474c6821-f8c5-400e-a584-0d63c13e0655-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.639441 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "474c6821-f8c5-400e-a584-0d63c13e0655" (UID: "474c6821-f8c5-400e-a584-0d63c13e0655"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.728224 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c87becc0-71e4-4013-a359-4a12f3a8a061-ovnkube-config\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.728288 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c87becc0-71e4-4013-a359-4a12f3a8a061-ovn-node-metrics-cert\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.728693 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-etc-openvswitch\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.728804 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-log-socket\") pod 
\"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.728863 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-run-systemd\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.728906 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmhfv\" (UniqueName: \"kubernetes.io/projected/c87becc0-71e4-4013-a359-4a12f3a8a061-kube-api-access-pmhfv\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729007 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c87becc0-71e4-4013-a359-4a12f3a8a061-ovnkube-script-lib\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729062 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-slash\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729137 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-run-netns\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729168 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-var-lib-openvswitch\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729207 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-run-openvswitch\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729251 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729307 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-run-ovn\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729337 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-cni-netd\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729410 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c87becc0-71e4-4013-a359-4a12f3a8a061-env-overrides\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729448 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-kubelet\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729479 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-run-ovn-kubernetes\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729525 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-node-log\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729565 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-cni-bin\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729625 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-systemd-units\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729692 4787 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/474c6821-f8c5-400e-a584-0d63c13e0655-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729873 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/474c6821-f8c5-400e-a584-0d63c13e0655-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.729986 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/474c6821-f8c5-400e-a584-0d63c13e0655-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.730077 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jdfz\" (UniqueName: \"kubernetes.io/projected/474c6821-f8c5-400e-a584-0d63c13e0655-kube-api-access-2jdfz\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.830593 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.830663 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-run-ovn\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.830701 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-cni-netd\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.830752 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c87becc0-71e4-4013-a359-4a12f3a8a061-env-overrides\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.830783 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-kubelet\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.830813 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-run-ovn-kubernetes\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.830851 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-node-log\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.830879 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-cni-bin\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.830911 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-systemd-units\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.830968 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c87becc0-71e4-4013-a359-4a12f3a8a061-ovnkube-config\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831005 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/c87becc0-71e4-4013-a359-4a12f3a8a061-ovn-node-metrics-cert\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831044 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-etc-openvswitch\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831080 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-log-socket\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831120 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-run-systemd\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831159 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmhfv\" (UniqueName: \"kubernetes.io/projected/c87becc0-71e4-4013-a359-4a12f3a8a061-kube-api-access-pmhfv\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831219 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/c87becc0-71e4-4013-a359-4a12f3a8a061-ovnkube-script-lib\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831252 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-slash\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831297 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-run-netns\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831328 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-var-lib-openvswitch\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831346 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-kubelet\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831362 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-run-openvswitch\") 
pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831410 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-run-openvswitch\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831552 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-node-log\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831570 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-etc-openvswitch\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831614 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-cni-bin\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831628 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-log-socket\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 
17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831632 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-systemd-units\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.831640 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-slash\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.832153 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c87becc0-71e4-4013-a359-4a12f3a8a061-env-overrides\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.832175 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-run-systemd\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.832191 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-var-lib-openvswitch\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.832325 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c87becc0-71e4-4013-a359-4a12f3a8a061-ovnkube-script-lib\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.832339 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-run-netns\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.832461 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-run-ovn-kubernetes\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.832480 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.832497 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-host-cni-netd\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.832509 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c87becc0-71e4-4013-a359-4a12f3a8a061-run-ovn\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.832649 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c87becc0-71e4-4013-a359-4a12f3a8a061-ovnkube-config\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.838406 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c87becc0-71e4-4013-a359-4a12f3a8a061-ovn-node-metrics-cert\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.861449 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmhfv\" (UniqueName: \"kubernetes.io/projected/c87becc0-71e4-4013-a359-4a12f3a8a061-kube-api-access-pmhfv\") pod \"ovnkube-node-56ggz\" (UID: \"c87becc0-71e4-4013-a359-4a12f3a8a061\") " pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: I0126 17:55:46.875882 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:46 crc kubenswrapper[4787]: W0126 17:55:46.938126 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc87becc0_71e4_4013_a359_4a12f3a8a061.slice/crio-da57b8c753940bd8d627c9c4ebc86749b6a5eae67839f48d0a798c668dc8f77c WatchSource:0}: Error finding container da57b8c753940bd8d627c9c4ebc86749b6a5eae67839f48d0a798c668dc8f77c: Status 404 returned error can't find the container with id da57b8c753940bd8d627c9c4ebc86749b6a5eae67839f48d0a798c668dc8f77c Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.435745 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovn-acl-logging/0.log" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.436448 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cpbtq_474c6821-f8c5-400e-a584-0d63c13e0655/ovn-controller/0.log" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.436848 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" event={"ID":"474c6821-f8c5-400e-a584-0d63c13e0655","Type":"ContainerDied","Data":"cf5ac270d0348bf884b300058e024f3784459a331a22fec9c385f9c59ee5a5c8"} Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.436890 4787 scope.go:117] "RemoveContainer" containerID="f68cfb6d7c911e9a9cee20fb12a2a1d01f7639c5f08d063e61c29dc4f8da7bc2" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.436964 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cpbtq" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.439438 4787 generic.go:334] "Generic (PLEG): container finished" podID="01dd99fb-59f5-4a7f-aa8e-73907ccf3077" containerID="fe936873b78694ab0530c3c0d6f63bc0b5685215142fbba8a0c43ae1d5afb2fc" exitCode=0 Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.439509 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" event={"ID":"01dd99fb-59f5-4a7f-aa8e-73907ccf3077","Type":"ContainerDied","Data":"fe936873b78694ab0530c3c0d6f63bc0b5685215142fbba8a0c43ae1d5afb2fc"} Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.442516 4787 generic.go:334] "Generic (PLEG): container finished" podID="c87becc0-71e4-4013-a359-4a12f3a8a061" containerID="4485aafd2ad5206343d8a52e4c75714b0ea65c841d4d9973ca5103839b2a52b3" exitCode=0 Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.442581 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" event={"ID":"c87becc0-71e4-4013-a359-4a12f3a8a061","Type":"ContainerDied","Data":"4485aafd2ad5206343d8a52e4c75714b0ea65c841d4d9973ca5103839b2a52b3"} Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.442601 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" event={"ID":"c87becc0-71e4-4013-a359-4a12f3a8a061","Type":"ContainerStarted","Data":"da57b8c753940bd8d627c9c4ebc86749b6a5eae67839f48d0a798c668dc8f77c"} Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.447224 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-65mpd_d2e50ad1-82f9-48f0-a103-6d584a3fa02e/kube-multus/2.log" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.460207 4787 scope.go:117] "RemoveContainer" containerID="157ddb7d9bb4d31bfa80a095941b56d5617ad8c92ad1c34221b0c2bbba531485" Jan 26 17:55:47 crc 
kubenswrapper[4787]: I0126 17:55:47.477856 4787 scope.go:117] "RemoveContainer" containerID="aacb85d7743aaf52585d926fbb6bd2f1a6b7a8f97c30bdea019566c6496e43be" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.496352 4787 scope.go:117] "RemoveContainer" containerID="f848dbc8b067aff455d9dfa44d2ad554db420b8c159b582b068101afc4e3a44f" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.511480 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cpbtq"] Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.519475 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cpbtq"] Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.522155 4787 scope.go:117] "RemoveContainer" containerID="ee2ecf49f9cb5d2f98b8ab09dd852819c966d6e6f124a99a5f4c2fcd2a9fc84d" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.552664 4787 scope.go:117] "RemoveContainer" containerID="549eb031c5ee53e5dcd14f61c1558f3e7960b696fe6705c3079c44329c931c98" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.570241 4787 scope.go:117] "RemoveContainer" containerID="ac7567f15c2b4f6ee9f8c616b7107b47f86a12a01811a4d48dd90f5517815e37" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.587914 4787 scope.go:117] "RemoveContainer" containerID="0666d6303b0064e5a2508c07637949f21dc4c13e5e2d424e5612daa1dfd7d1ac" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.596312 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="474c6821-f8c5-400e-a584-0d63c13e0655" path="/var/lib/kubelet/pods/474c6821-f8c5-400e-a584-0d63c13e0655/volumes" Jan 26 17:55:47 crc kubenswrapper[4787]: I0126 17:55:47.604777 4787 scope.go:117] "RemoveContainer" containerID="f5b334b11b9f31eadf07452edca360828cb9eb6cf80173421622055e44c6fa82" Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.454108 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" 
event={"ID":"c87becc0-71e4-4013-a359-4a12f3a8a061","Type":"ContainerStarted","Data":"6037167029a97b6e8acb1d67b109ad9d6719dd1fedd8077ed863bd857dcd73ca"} Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.454187 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" event={"ID":"c87becc0-71e4-4013-a359-4a12f3a8a061","Type":"ContainerStarted","Data":"7b0d3d3aa0f59efa33e392d907fd023ad25f2bf328dac176b2b371cc67ee429e"} Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.454199 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" event={"ID":"c87becc0-71e4-4013-a359-4a12f3a8a061","Type":"ContainerStarted","Data":"1a386f7bc83dfa375e25a50fcdcf30c4cd4f45b61331c166be3b2231be06dd35"} Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.454209 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" event={"ID":"c87becc0-71e4-4013-a359-4a12f3a8a061","Type":"ContainerStarted","Data":"027ee3fc91a23eb3308a7fc557bb0d069839bbd2e45c343ee7e71f256acf69ba"} Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.454218 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" event={"ID":"c87becc0-71e4-4013-a359-4a12f3a8a061","Type":"ContainerStarted","Data":"5ebe5d99c64e52a4fdf64d713b00dcca3ee94d0c81d77150052ccdcbaedb1c2d"} Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.454226 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" event={"ID":"c87becc0-71e4-4013-a359-4a12f3a8a061","Type":"ContainerStarted","Data":"d4625e4f73a9dce860a33f8e6c9994bdd9c7dd674d4e3f5c2ba32f43af2879c3"} Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.484797 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.652226 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-bundle\") pod \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.652319 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmh78\" (UniqueName: \"kubernetes.io/projected/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-kube-api-access-vmh78\") pod \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.652394 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-util\") pod \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\" (UID: \"01dd99fb-59f5-4a7f-aa8e-73907ccf3077\") " Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.653853 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-bundle" (OuterVolumeSpecName: "bundle") pod "01dd99fb-59f5-4a7f-aa8e-73907ccf3077" (UID: "01dd99fb-59f5-4a7f-aa8e-73907ccf3077"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.661338 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-kube-api-access-vmh78" (OuterVolumeSpecName: "kube-api-access-vmh78") pod "01dd99fb-59f5-4a7f-aa8e-73907ccf3077" (UID: "01dd99fb-59f5-4a7f-aa8e-73907ccf3077"). InnerVolumeSpecName "kube-api-access-vmh78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.674999 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-util" (OuterVolumeSpecName: "util") pod "01dd99fb-59f5-4a7f-aa8e-73907ccf3077" (UID: "01dd99fb-59f5-4a7f-aa8e-73907ccf3077"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.754791 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmh78\" (UniqueName: \"kubernetes.io/projected/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-kube-api-access-vmh78\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.754839 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-util\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:48 crc kubenswrapper[4787]: I0126 17:55:48.754857 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01dd99fb-59f5-4a7f-aa8e-73907ccf3077-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:55:49 crc kubenswrapper[4787]: I0126 17:55:49.465813 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" event={"ID":"01dd99fb-59f5-4a7f-aa8e-73907ccf3077","Type":"ContainerDied","Data":"c3d942f9760d82dc92d22a4e2dea14a0ff64556dc48cdd178992c1ffbcc784f5"} Jan 26 17:55:49 crc kubenswrapper[4787]: I0126 17:55:49.465866 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3d942f9760d82dc92d22a4e2dea14a0ff64556dc48cdd178992c1ffbcc784f5" Jan 26 17:55:49 crc kubenswrapper[4787]: I0126 17:55:49.465978 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2" Jan 26 17:55:49 crc kubenswrapper[4787]: E0126 17:55:49.545492 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01dd99fb_59f5_4a7f_aa8e_73907ccf3077.slice\": RecentStats: unable to find data in memory cache]" Jan 26 17:55:51 crc kubenswrapper[4787]: I0126 17:55:51.479502 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" event={"ID":"c87becc0-71e4-4013-a359-4a12f3a8a061","Type":"ContainerStarted","Data":"03c6493dbe9e0f786b2f38ddc7d392a1c9a7d058c364e1d767f708253149f52b"} Jan 26 17:55:52 crc kubenswrapper[4787]: I0126 17:55:52.889659 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wbxfd"] Jan 26 17:55:52 crc kubenswrapper[4787]: E0126 17:55:52.889889 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01dd99fb-59f5-4a7f-aa8e-73907ccf3077" containerName="pull" Jan 26 17:55:52 crc kubenswrapper[4787]: I0126 17:55:52.889904 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="01dd99fb-59f5-4a7f-aa8e-73907ccf3077" containerName="pull" Jan 26 17:55:52 crc kubenswrapper[4787]: E0126 17:55:52.889926 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01dd99fb-59f5-4a7f-aa8e-73907ccf3077" containerName="extract" Jan 26 17:55:52 crc kubenswrapper[4787]: I0126 17:55:52.889933 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="01dd99fb-59f5-4a7f-aa8e-73907ccf3077" containerName="extract" Jan 26 17:55:52 crc kubenswrapper[4787]: E0126 17:55:52.889942 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01dd99fb-59f5-4a7f-aa8e-73907ccf3077" containerName="util" Jan 26 17:55:52 crc kubenswrapper[4787]: I0126 17:55:52.889966 4787 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="01dd99fb-59f5-4a7f-aa8e-73907ccf3077" containerName="util" Jan 26 17:55:52 crc kubenswrapper[4787]: I0126 17:55:52.890092 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="01dd99fb-59f5-4a7f-aa8e-73907ccf3077" containerName="extract" Jan 26 17:55:52 crc kubenswrapper[4787]: I0126 17:55:52.890490 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:55:52 crc kubenswrapper[4787]: I0126 17:55:52.892824 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 17:55:52 crc kubenswrapper[4787]: I0126 17:55:52.892860 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 17:55:52 crc kubenswrapper[4787]: I0126 17:55:52.893106 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-mcw5x" Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.014970 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlhwt\" (UniqueName: \"kubernetes.io/projected/386546ef-4d1a-47cc-badf-1bff4394dbf3-kube-api-access-qlhwt\") pod \"nmstate-operator-646758c888-wbxfd\" (UID: \"386546ef-4d1a-47cc-badf-1bff4394dbf3\") " pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.115911 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlhwt\" (UniqueName: \"kubernetes.io/projected/386546ef-4d1a-47cc-badf-1bff4394dbf3-kube-api-access-qlhwt\") pod \"nmstate-operator-646758c888-wbxfd\" (UID: \"386546ef-4d1a-47cc-badf-1bff4394dbf3\") " pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.143835 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-qlhwt\" (UniqueName: \"kubernetes.io/projected/386546ef-4d1a-47cc-badf-1bff4394dbf3-kube-api-access-qlhwt\") pod \"nmstate-operator-646758c888-wbxfd\" (UID: \"386546ef-4d1a-47cc-badf-1bff4394dbf3\") " pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.236080 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:55:53 crc kubenswrapper[4787]: E0126 17:55:53.263036 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(093dae5f9fc7ef18e89b6851139e09e9f5289154ef14c5f55f4f58f5f17ca8e3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 17:55:53 crc kubenswrapper[4787]: E0126 17:55:53.263118 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(093dae5f9fc7ef18e89b6851139e09e9f5289154ef14c5f55f4f58f5f17ca8e3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:55:53 crc kubenswrapper[4787]: E0126 17:55:53.263140 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(093dae5f9fc7ef18e89b6851139e09e9f5289154ef14c5f55f4f58f5f17ca8e3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:55:53 crc kubenswrapper[4787]: E0126 17:55:53.263191 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-wbxfd_openshift-nmstate(386546ef-4d1a-47cc-badf-1bff4394dbf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-wbxfd_openshift-nmstate(386546ef-4d1a-47cc-badf-1bff4394dbf3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(093dae5f9fc7ef18e89b6851139e09e9f5289154ef14c5f55f4f58f5f17ca8e3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" podUID="386546ef-4d1a-47cc-badf-1bff4394dbf3" Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.495631 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" event={"ID":"c87becc0-71e4-4013-a359-4a12f3a8a061","Type":"ContainerStarted","Data":"a2f5958347957f01413441f6a27d1e27f1427c32245338aa194b93027d7b96a3"} Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.496003 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.496027 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.543345 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.546657 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" podStartSLOduration=7.546634486 
podStartE2EDuration="7.546634486s" podCreationTimestamp="2026-01-26 17:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:55:53.544725777 +0000 UTC m=+722.251861910" watchObservedRunningTime="2026-01-26 17:55:53.546634486 +0000 UTC m=+722.253770619" Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.835583 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wbxfd"] Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.835674 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:55:53 crc kubenswrapper[4787]: I0126 17:55:53.836069 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:55:53 crc kubenswrapper[4787]: E0126 17:55:53.854140 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(ebca29b9e4b1b56c0c912a5a60a5845ce076ea3574b72ef6d21d1e4c0b74fc8b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 17:55:53 crc kubenswrapper[4787]: E0126 17:55:53.854199 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(ebca29b9e4b1b56c0c912a5a60a5845ce076ea3574b72ef6d21d1e4c0b74fc8b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:55:53 crc kubenswrapper[4787]: E0126 17:55:53.854222 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(ebca29b9e4b1b56c0c912a5a60a5845ce076ea3574b72ef6d21d1e4c0b74fc8b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:55:53 crc kubenswrapper[4787]: E0126 17:55:53.854267 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-wbxfd_openshift-nmstate(386546ef-4d1a-47cc-badf-1bff4394dbf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-wbxfd_openshift-nmstate(386546ef-4d1a-47cc-badf-1bff4394dbf3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(ebca29b9e4b1b56c0c912a5a60a5845ce076ea3574b72ef6d21d1e4c0b74fc8b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" podUID="386546ef-4d1a-47cc-badf-1bff4394dbf3" Jan 26 17:55:54 crc kubenswrapper[4787]: I0126 17:55:54.501256 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:54 crc kubenswrapper[4787]: I0126 17:55:54.542654 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:55:58 crc kubenswrapper[4787]: I0126 17:55:58.589527 4787 scope.go:117] "RemoveContainer" containerID="e090b2e5f3f08a1ac007a434bb7a458de7e6770d683fa1c18270c831b8aa5db5" Jan 26 17:55:58 crc kubenswrapper[4787]: E0126 17:55:58.590577 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-65mpd_openshift-multus(d2e50ad1-82f9-48f0-a103-6d584a3fa02e)\"" pod="openshift-multus/multus-65mpd" podUID="d2e50ad1-82f9-48f0-a103-6d584a3fa02e" Jan 26 17:56:07 crc kubenswrapper[4787]: I0126 17:56:07.589241 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:56:07 crc kubenswrapper[4787]: I0126 17:56:07.590199 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:56:07 crc kubenswrapper[4787]: E0126 17:56:07.630846 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(06dee7cf0578c9955935d6b3cf1bdb766f0aa6e39aaf7822a050a586d8e0daee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 17:56:07 crc kubenswrapper[4787]: E0126 17:56:07.630931 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(06dee7cf0578c9955935d6b3cf1bdb766f0aa6e39aaf7822a050a586d8e0daee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:56:07 crc kubenswrapper[4787]: E0126 17:56:07.630987 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(06dee7cf0578c9955935d6b3cf1bdb766f0aa6e39aaf7822a050a586d8e0daee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:56:07 crc kubenswrapper[4787]: E0126 17:56:07.631049 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-wbxfd_openshift-nmstate(386546ef-4d1a-47cc-badf-1bff4394dbf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-wbxfd_openshift-nmstate(386546ef-4d1a-47cc-badf-1bff4394dbf3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-wbxfd_openshift-nmstate_386546ef-4d1a-47cc-badf-1bff4394dbf3_0(06dee7cf0578c9955935d6b3cf1bdb766f0aa6e39aaf7822a050a586d8e0daee): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" podUID="386546ef-4d1a-47cc-badf-1bff4394dbf3" Jan 26 17:56:12 crc kubenswrapper[4787]: I0126 17:56:12.589844 4787 scope.go:117] "RemoveContainer" containerID="e090b2e5f3f08a1ac007a434bb7a458de7e6770d683fa1c18270c831b8aa5db5" Jan 26 17:56:13 crc kubenswrapper[4787]: I0126 17:56:13.605979 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-65mpd_d2e50ad1-82f9-48f0-a103-6d584a3fa02e/kube-multus/2.log" Jan 26 17:56:13 crc kubenswrapper[4787]: I0126 17:56:13.606360 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65mpd" event={"ID":"d2e50ad1-82f9-48f0-a103-6d584a3fa02e","Type":"ContainerStarted","Data":"d621691a0e56251080a1abc7a7fa505e5b7661bd09efdd562226756320cf560f"} Jan 26 17:56:16 crc kubenswrapper[4787]: I0126 17:56:16.899456 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-56ggz" Jan 26 17:56:22 crc kubenswrapper[4787]: I0126 17:56:22.588805 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:56:22 crc kubenswrapper[4787]: I0126 17:56:22.590888 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" Jan 26 17:56:22 crc kubenswrapper[4787]: I0126 17:56:22.806725 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wbxfd"] Jan 26 17:56:23 crc kubenswrapper[4787]: I0126 17:56:23.661427 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" event={"ID":"386546ef-4d1a-47cc-badf-1bff4394dbf3","Type":"ContainerStarted","Data":"1eb375fffa609353ef1d415339c36d20f14e15d98a4bcdc8aa0d5eb2738a388f"} Jan 26 17:56:25 crc kubenswrapper[4787]: I0126 17:56:25.673367 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" event={"ID":"386546ef-4d1a-47cc-badf-1bff4394dbf3","Type":"ContainerStarted","Data":"e432c24776fe841f132170f9a80b224bd15303e69dacbb11be64b99e920e9744"} Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.724571 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-wbxfd" podStartSLOduration=32.424312871 podStartE2EDuration="34.724554389s" podCreationTimestamp="2026-01-26 17:55:52 +0000 UTC" firstStartedPulling="2026-01-26 17:56:22.81598755 +0000 UTC m=+751.523123683" lastFinishedPulling="2026-01-26 17:56:25.116229068 +0000 UTC m=+753.823365201" observedRunningTime="2026-01-26 17:56:25.691795772 +0000 UTC m=+754.398931905" watchObservedRunningTime="2026-01-26 17:56:26.724554389 +0000 UTC m=+755.431690522" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.726711 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rwqxq"] Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.727894 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwqxq" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.730055 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-48htr" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.744421 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rwqxq"] Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.756501 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5"] Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.757376 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.760979 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.771334 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tjd5k"] Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.772126 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.785592 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5"] Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.866907 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fprz\" (UniqueName: \"kubernetes.io/projected/d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9-kube-api-access-2fprz\") pod \"nmstate-webhook-8474b5b9d8-pk9n5\" (UID: \"d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.866973 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lgm9\" (UniqueName: \"kubernetes.io/projected/f611b8d8-b794-4b15-bb02-25776ca06b96-kube-api-access-6lgm9\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.867058 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f611b8d8-b794-4b15-bb02-25776ca06b96-nmstate-lock\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.867088 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f611b8d8-b794-4b15-bb02-25776ca06b96-dbus-socket\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.867149 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t666z\" (UniqueName: \"kubernetes.io/projected/e48f5af3-d4d5-4e5d-9bfb-ce8ce7e52ab9-kube-api-access-t666z\") pod \"nmstate-metrics-54757c584b-rwqxq\" (UID: \"e48f5af3-d4d5-4e5d-9bfb-ce8ce7e52ab9\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rwqxq" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.867172 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-pk9n5\" (UID: \"d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.867203 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f611b8d8-b794-4b15-bb02-25776ca06b96-ovs-socket\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.869167 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck"] Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.869761 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.873442 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.873819 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.876916 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hlflg" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.878220 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck"] Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.969231 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fprz\" (UniqueName: \"kubernetes.io/projected/d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9-kube-api-access-2fprz\") pod \"nmstate-webhook-8474b5b9d8-pk9n5\" (UID: \"d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.969416 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lgm9\" (UniqueName: \"kubernetes.io/projected/f611b8d8-b794-4b15-bb02-25776ca06b96-kube-api-access-6lgm9\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.969498 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f611b8d8-b794-4b15-bb02-25776ca06b96-nmstate-lock\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 
crc kubenswrapper[4787]: I0126 17:56:26.969545 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f611b8d8-b794-4b15-bb02-25776ca06b96-dbus-socket\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.969585 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t666z\" (UniqueName: \"kubernetes.io/projected/e48f5af3-d4d5-4e5d-9bfb-ce8ce7e52ab9-kube-api-access-t666z\") pod \"nmstate-metrics-54757c584b-rwqxq\" (UID: \"e48f5af3-d4d5-4e5d-9bfb-ce8ce7e52ab9\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rwqxq" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.969614 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-pk9n5\" (UID: \"d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.969621 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f611b8d8-b794-4b15-bb02-25776ca06b96-nmstate-lock\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.969646 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f611b8d8-b794-4b15-bb02-25776ca06b96-ovs-socket\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.969726 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f611b8d8-b794-4b15-bb02-25776ca06b96-ovs-socket\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 crc kubenswrapper[4787]: E0126 17:56:26.969729 4787 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 26 17:56:26 crc kubenswrapper[4787]: E0126 17:56:26.969797 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9-tls-key-pair podName:d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9 nodeName:}" failed. No retries permitted until 2026-01-26 17:56:27.469777621 +0000 UTC m=+756.176913754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-pk9n5" (UID: "d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9") : secret "openshift-nmstate-webhook" not found Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.969937 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f611b8d8-b794-4b15-bb02-25776ca06b96-dbus-socket\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:26 crc kubenswrapper[4787]: I0126 17:56:26.993771 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t666z\" (UniqueName: \"kubernetes.io/projected/e48f5af3-d4d5-4e5d-9bfb-ce8ce7e52ab9-kube-api-access-t666z\") pod \"nmstate-metrics-54757c584b-rwqxq\" (UID: \"e48f5af3-d4d5-4e5d-9bfb-ce8ce7e52ab9\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rwqxq" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.003211 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lgm9\" (UniqueName: \"kubernetes.io/projected/f611b8d8-b794-4b15-bb02-25776ca06b96-kube-api-access-6lgm9\") pod \"nmstate-handler-tjd5k\" (UID: \"f611b8d8-b794-4b15-bb02-25776ca06b96\") " pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.004711 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fprz\" (UniqueName: \"kubernetes.io/projected/d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9-kube-api-access-2fprz\") pod \"nmstate-webhook-8474b5b9d8-pk9n5\" (UID: \"d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.048149 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwqxq" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.058061 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55c858d47f-w9mrm"] Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.058673 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.072642 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55c858d47f-w9mrm"] Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.073520 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-oauth-serving-cert\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.073588 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e047ded0-9151-4083-bcac-3dbba5ca2eeb-console-oauth-config\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.073614 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e7890c8e-6fb8-42b4-a953-d5dfac5ed67a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fcdck\" (UID: \"e7890c8e-6fb8-42b4-a953-d5dfac5ed67a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.073638 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e047ded0-9151-4083-bcac-3dbba5ca2eeb-console-serving-cert\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.073661 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8ch6\" (UniqueName: \"kubernetes.io/projected/e047ded0-9151-4083-bcac-3dbba5ca2eeb-kube-api-access-x8ch6\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.073698 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7890c8e-6fb8-42b4-a953-d5dfac5ed67a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fcdck\" (UID: \"e7890c8e-6fb8-42b4-a953-d5dfac5ed67a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.073770 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-console-config\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.073807 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-trusted-ca-bundle\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.073837 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-677kv\" (UniqueName: \"kubernetes.io/projected/e7890c8e-6fb8-42b4-a953-d5dfac5ed67a-kube-api-access-677kv\") pod \"nmstate-console-plugin-7754f76f8b-fcdck\" (UID: 
\"e7890c8e-6fb8-42b4-a953-d5dfac5ed67a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.073890 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-service-ca\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.099477 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.175068 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-oauth-serving-cert\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.175432 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e047ded0-9151-4083-bcac-3dbba5ca2eeb-console-oauth-config\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.175454 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e7890c8e-6fb8-42b4-a953-d5dfac5ed67a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fcdck\" (UID: \"e7890c8e-6fb8-42b4-a953-d5dfac5ed67a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.175621 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e047ded0-9151-4083-bcac-3dbba5ca2eeb-console-serving-cert\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.176289 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8ch6\" (UniqueName: \"kubernetes.io/projected/e047ded0-9151-4083-bcac-3dbba5ca2eeb-kube-api-access-x8ch6\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.176338 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7890c8e-6fb8-42b4-a953-d5dfac5ed67a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fcdck\" (UID: \"e7890c8e-6fb8-42b4-a953-d5dfac5ed67a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.176375 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-console-config\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.176407 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-trusted-ca-bundle\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.176428 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-677kv\" (UniqueName: \"kubernetes.io/projected/e7890c8e-6fb8-42b4-a953-d5dfac5ed67a-kube-api-access-677kv\") pod \"nmstate-console-plugin-7754f76f8b-fcdck\" (UID: \"e7890c8e-6fb8-42b4-a953-d5dfac5ed67a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.176446 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-service-ca\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.176933 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e7890c8e-6fb8-42b4-a953-d5dfac5ed67a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fcdck\" (UID: \"e7890c8e-6fb8-42b4-a953-d5dfac5ed67a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.176162 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-oauth-serving-cert\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.177881 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-service-ca\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.178426 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-console-config\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.179128 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e047ded0-9151-4083-bcac-3dbba5ca2eeb-trusted-ca-bundle\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.182525 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e047ded0-9151-4083-bcac-3dbba5ca2eeb-console-oauth-config\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.184395 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e047ded0-9151-4083-bcac-3dbba5ca2eeb-console-serving-cert\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.189178 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7890c8e-6fb8-42b4-a953-d5dfac5ed67a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fcdck\" (UID: \"e7890c8e-6fb8-42b4-a953-d5dfac5ed67a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.196690 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x8ch6\" (UniqueName: \"kubernetes.io/projected/e047ded0-9151-4083-bcac-3dbba5ca2eeb-kube-api-access-x8ch6\") pod \"console-55c858d47f-w9mrm\" (UID: \"e047ded0-9151-4083-bcac-3dbba5ca2eeb\") " pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.199239 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-677kv\" (UniqueName: \"kubernetes.io/projected/e7890c8e-6fb8-42b4-a953-d5dfac5ed67a-kube-api-access-677kv\") pod \"nmstate-console-plugin-7754f76f8b-fcdck\" (UID: \"e7890c8e-6fb8-42b4-a953-d5dfac5ed67a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.248730 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rwqxq"] Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.452298 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.479331 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-pk9n5\" (UID: \"d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.483060 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-pk9n5\" (UID: \"d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.490797 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.631776 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55c858d47f-w9mrm"] Jan 26 17:56:27 crc kubenswrapper[4787]: W0126 17:56:27.638942 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode047ded0_9151_4083_bcac_3dbba5ca2eeb.slice/crio-2572c7c78a6f69fa6a27f5a2ff23701a5b9de635b85795e16205c9c36130590e WatchSource:0}: Error finding container 2572c7c78a6f69fa6a27f5a2ff23701a5b9de635b85795e16205c9c36130590e: Status 404 returned error can't find the container with id 2572c7c78a6f69fa6a27f5a2ff23701a5b9de635b85795e16205c9c36130590e Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.671525 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.698140 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tjd5k" event={"ID":"f611b8d8-b794-4b15-bb02-25776ca06b96","Type":"ContainerStarted","Data":"29ccfbdb624268605332c0ae53cb4016673c467d6099de97a06119d19e290085"} Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.701060 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwqxq" event={"ID":"e48f5af3-d4d5-4e5d-9bfb-ce8ce7e52ab9","Type":"ContainerStarted","Data":"a0d0dd86f5620d71692e1cd16e2a6652c62e6d243380bc39e27affebf6d519f5"} Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.701667 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck"] Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.702095 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55c858d47f-w9mrm" 
event={"ID":"e047ded0-9151-4083-bcac-3dbba5ca2eeb","Type":"ContainerStarted","Data":"2572c7c78a6f69fa6a27f5a2ff23701a5b9de635b85795e16205c9c36130590e"} Jan 26 17:56:27 crc kubenswrapper[4787]: W0126 17:56:27.716779 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7890c8e_6fb8_42b4_a953_d5dfac5ed67a.slice/crio-93855cfad0762066481a9a418ba81f9f6e3d4451cc42057cc12fc58d6b889da8 WatchSource:0}: Error finding container 93855cfad0762066481a9a418ba81f9f6e3d4451cc42057cc12fc58d6b889da8: Status 404 returned error can't find the container with id 93855cfad0762066481a9a418ba81f9f6e3d4451cc42057cc12fc58d6b889da8 Jan 26 17:56:27 crc kubenswrapper[4787]: I0126 17:56:27.839288 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5"] Jan 26 17:56:27 crc kubenswrapper[4787]: W0126 17:56:27.845473 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6d7ea7c_96d8_4ed5_a2e3_b5e8012732c9.slice/crio-18a8b34cd3c4d9d83c1458343cb61f396427142e21366aa452dd9869cf32880d WatchSource:0}: Error finding container 18a8b34cd3c4d9d83c1458343cb61f396427142e21366aa452dd9869cf32880d: Status 404 returned error can't find the container with id 18a8b34cd3c4d9d83c1458343cb61f396427142e21366aa452dd9869cf32880d Jan 26 17:56:28 crc kubenswrapper[4787]: I0126 17:56:28.709469 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55c858d47f-w9mrm" event={"ID":"e047ded0-9151-4083-bcac-3dbba5ca2eeb","Type":"ContainerStarted","Data":"2c11185577f986388563960e72031ee2699900175913c6b4e67c533dace6dbf7"} Jan 26 17:56:28 crc kubenswrapper[4787]: I0126 17:56:28.713858 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" 
event={"ID":"e7890c8e-6fb8-42b4-a953-d5dfac5ed67a","Type":"ContainerStarted","Data":"93855cfad0762066481a9a418ba81f9f6e3d4451cc42057cc12fc58d6b889da8"} Jan 26 17:56:28 crc kubenswrapper[4787]: I0126 17:56:28.717081 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" event={"ID":"d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9","Type":"ContainerStarted","Data":"18a8b34cd3c4d9d83c1458343cb61f396427142e21366aa452dd9869cf32880d"} Jan 26 17:56:28 crc kubenswrapper[4787]: I0126 17:56:28.734363 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-55c858d47f-w9mrm" podStartSLOduration=1.7343454459999998 podStartE2EDuration="1.734345446s" podCreationTimestamp="2026-01-26 17:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:56:28.731055423 +0000 UTC m=+757.438191576" watchObservedRunningTime="2026-01-26 17:56:28.734345446 +0000 UTC m=+757.441481599" Jan 26 17:56:30 crc kubenswrapper[4787]: I0126 17:56:30.738327 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tjd5k" event={"ID":"f611b8d8-b794-4b15-bb02-25776ca06b96","Type":"ContainerStarted","Data":"21b052d6a067a6e1cef2445ea3743c7fb501d3cd44b64dd3dd90fa89af692b3b"} Jan 26 17:56:30 crc kubenswrapper[4787]: I0126 17:56:30.738924 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:30 crc kubenswrapper[4787]: I0126 17:56:30.744382 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" event={"ID":"e7890c8e-6fb8-42b4-a953-d5dfac5ed67a","Type":"ContainerStarted","Data":"12d4540b99d27ea42a8dc83f141147b911b9ad4dfc80c4120613fd682a8e16df"} Jan 26 17:56:30 crc kubenswrapper[4787]: I0126 17:56:30.750928 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" event={"ID":"d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9","Type":"ContainerStarted","Data":"5c97726dc7868faae9e82473de6315d2f34411b5b2e66c1f44312fa4f7825f5a"} Jan 26 17:56:30 crc kubenswrapper[4787]: I0126 17:56:30.751328 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" Jan 26 17:56:30 crc kubenswrapper[4787]: I0126 17:56:30.756688 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwqxq" event={"ID":"e48f5af3-d4d5-4e5d-9bfb-ce8ce7e52ab9","Type":"ContainerStarted","Data":"ff4e1cedac13325223d8e090b2306ff2606e8c7078e54a2c0670300fd6145c81"} Jan 26 17:56:30 crc kubenswrapper[4787]: I0126 17:56:30.768842 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tjd5k" podStartSLOduration=1.604172559 podStartE2EDuration="4.76881387s" podCreationTimestamp="2026-01-26 17:56:26 +0000 UTC" firstStartedPulling="2026-01-26 17:56:27.133300881 +0000 UTC m=+755.840437014" lastFinishedPulling="2026-01-26 17:56:30.297942192 +0000 UTC m=+759.005078325" observedRunningTime="2026-01-26 17:56:30.763497675 +0000 UTC m=+759.470633808" watchObservedRunningTime="2026-01-26 17:56:30.76881387 +0000 UTC m=+759.475950033" Jan 26 17:56:30 crc kubenswrapper[4787]: I0126 17:56:30.789269 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fcdck" podStartSLOduration=2.209299123 podStartE2EDuration="4.789246179s" podCreationTimestamp="2026-01-26 17:56:26 +0000 UTC" firstStartedPulling="2026-01-26 17:56:27.719600357 +0000 UTC m=+756.426736490" lastFinishedPulling="2026-01-26 17:56:30.299547413 +0000 UTC m=+759.006683546" observedRunningTime="2026-01-26 17:56:30.783290838 +0000 UTC m=+759.490426981" watchObservedRunningTime="2026-01-26 17:56:30.789246179 +0000 UTC m=+759.496382322" Jan 26 
17:56:30 crc kubenswrapper[4787]: I0126 17:56:30.815161 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" podStartSLOduration=2.364551982 podStartE2EDuration="4.815133625s" podCreationTimestamp="2026-01-26 17:56:26 +0000 UTC" firstStartedPulling="2026-01-26 17:56:27.847481032 +0000 UTC m=+756.554617165" lastFinishedPulling="2026-01-26 17:56:30.298062675 +0000 UTC m=+759.005198808" observedRunningTime="2026-01-26 17:56:30.807560783 +0000 UTC m=+759.514696936" watchObservedRunningTime="2026-01-26 17:56:30.815133625 +0000 UTC m=+759.522269758" Jan 26 17:56:33 crc kubenswrapper[4787]: I0126 17:56:33.660274 4787 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 17:56:33 crc kubenswrapper[4787]: I0126 17:56:33.775378 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwqxq" event={"ID":"e48f5af3-d4d5-4e5d-9bfb-ce8ce7e52ab9","Type":"ContainerStarted","Data":"08ee0d5958c47b7f3007cba98e4834a70b323a9c1057f63a1cdf8b60338689ec"} Jan 26 17:56:33 crc kubenswrapper[4787]: I0126 17:56:33.803504 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwqxq" podStartSLOduration=2.343229433 podStartE2EDuration="7.803484794s" podCreationTimestamp="2026-01-26 17:56:26 +0000 UTC" firstStartedPulling="2026-01-26 17:56:27.25505749 +0000 UTC m=+755.962193623" lastFinishedPulling="2026-01-26 17:56:32.715312851 +0000 UTC m=+761.422448984" observedRunningTime="2026-01-26 17:56:33.799864472 +0000 UTC m=+762.507000625" watchObservedRunningTime="2026-01-26 17:56:33.803484794 +0000 UTC m=+762.510620927" Jan 26 17:56:37 crc kubenswrapper[4787]: I0126 17:56:37.119762 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tjd5k" Jan 26 17:56:37 crc 
kubenswrapper[4787]: I0126 17:56:37.453562 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:37 crc kubenswrapper[4787]: I0126 17:56:37.453630 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:37 crc kubenswrapper[4787]: I0126 17:56:37.460773 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:37 crc kubenswrapper[4787]: I0126 17:56:37.800298 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-55c858d47f-w9mrm" Jan 26 17:56:37 crc kubenswrapper[4787]: I0126 17:56:37.849805 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gndbr"] Jan 26 17:56:46 crc kubenswrapper[4787]: I0126 17:56:46.808412 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:56:46 crc kubenswrapper[4787]: I0126 17:56:46.808739 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:56:47 crc kubenswrapper[4787]: I0126 17:56:47.677886 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pk9n5" Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.668541 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg"] Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.671530 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.677275 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.694430 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg"] Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.837762 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpk2\" (UniqueName: \"kubernetes.io/projected/fd6452e2-f24e-41fa-9979-70cdc9171695-kube-api-access-7dpk2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.838128 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.838259 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-bundle\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.939413 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpk2\" (UniqueName: \"kubernetes.io/projected/fd6452e2-f24e-41fa-9979-70cdc9171695-kube-api-access-7dpk2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.939470 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.939501 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.940162 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.940218 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:00 crc kubenswrapper[4787]: I0126 17:57:00.960021 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dpk2\" (UniqueName: \"kubernetes.io/projected/fd6452e2-f24e-41fa-9979-70cdc9171695-kube-api-access-7dpk2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:01 crc kubenswrapper[4787]: I0126 17:57:01.010156 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:01 crc kubenswrapper[4787]: I0126 17:57:01.265150 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg"] Jan 26 17:57:01 crc kubenswrapper[4787]: I0126 17:57:01.938524 4787 generic.go:334] "Generic (PLEG): container finished" podID="fd6452e2-f24e-41fa-9979-70cdc9171695" containerID="7bc77131c9b2a83ec2ee9d3c61dc4bff94d321ede345fadf4f929cfe9a4186fc" exitCode=0 Jan 26 17:57:01 crc kubenswrapper[4787]: I0126 17:57:01.938596 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" event={"ID":"fd6452e2-f24e-41fa-9979-70cdc9171695","Type":"ContainerDied","Data":"7bc77131c9b2a83ec2ee9d3c61dc4bff94d321ede345fadf4f929cfe9a4186fc"} Jan 26 17:57:01 crc kubenswrapper[4787]: I0126 17:57:01.938830 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" event={"ID":"fd6452e2-f24e-41fa-9979-70cdc9171695","Type":"ContainerStarted","Data":"ebf1a9b60b47431811596374a9519ed86af59abaf8cd389ee431a9ca46a736e0"} Jan 26 17:57:02 crc kubenswrapper[4787]: I0126 17:57:02.888799 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-gndbr" podUID="11106d86-1e86-47cf-907d-9fb690a4f56e" containerName="console" containerID="cri-o://f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690" gracePeriod=15 Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.453561 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gndbr_11106d86-1e86-47cf-907d-9fb690a4f56e/console/0.log" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.453845 4787 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.473096 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-serving-cert\") pod \"11106d86-1e86-47cf-907d-9fb690a4f56e\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.473194 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-console-config\") pod \"11106d86-1e86-47cf-907d-9fb690a4f56e\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.473224 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-service-ca\") pod \"11106d86-1e86-47cf-907d-9fb690a4f56e\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.473261 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-trusted-ca-bundle\") pod \"11106d86-1e86-47cf-907d-9fb690a4f56e\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.473306 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-oauth-config\") pod \"11106d86-1e86-47cf-907d-9fb690a4f56e\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.473358 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-oauth-serving-cert\") pod \"11106d86-1e86-47cf-907d-9fb690a4f56e\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.473383 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt44s\" (UniqueName: \"kubernetes.io/projected/11106d86-1e86-47cf-907d-9fb690a4f56e-kube-api-access-qt44s\") pod \"11106d86-1e86-47cf-907d-9fb690a4f56e\" (UID: \"11106d86-1e86-47cf-907d-9fb690a4f56e\") " Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.474628 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "11106d86-1e86-47cf-907d-9fb690a4f56e" (UID: "11106d86-1e86-47cf-907d-9fb690a4f56e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.474635 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-console-config" (OuterVolumeSpecName: "console-config") pod "11106d86-1e86-47cf-907d-9fb690a4f56e" (UID: "11106d86-1e86-47cf-907d-9fb690a4f56e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.475022 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "11106d86-1e86-47cf-907d-9fb690a4f56e" (UID: "11106d86-1e86-47cf-907d-9fb690a4f56e"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.475336 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-service-ca" (OuterVolumeSpecName: "service-ca") pod "11106d86-1e86-47cf-907d-9fb690a4f56e" (UID: "11106d86-1e86-47cf-907d-9fb690a4f56e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.480525 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "11106d86-1e86-47cf-907d-9fb690a4f56e" (UID: "11106d86-1e86-47cf-907d-9fb690a4f56e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.480654 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "11106d86-1e86-47cf-907d-9fb690a4f56e" (UID: "11106d86-1e86-47cf-907d-9fb690a4f56e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.480560 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11106d86-1e86-47cf-907d-9fb690a4f56e-kube-api-access-qt44s" (OuterVolumeSpecName: "kube-api-access-qt44s") pod "11106d86-1e86-47cf-907d-9fb690a4f56e" (UID: "11106d86-1e86-47cf-907d-9fb690a4f56e"). InnerVolumeSpecName "kube-api-access-qt44s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.574733 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.575102 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.575117 4787 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.575127 4787 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.575136 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt44s\" (UniqueName: \"kubernetes.io/projected/11106d86-1e86-47cf-907d-9fb690a4f56e-kube-api-access-qt44s\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.575144 4787 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11106d86-1e86-47cf-907d-9fb690a4f56e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.575152 4787 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11106d86-1e86-47cf-907d-9fb690a4f56e-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:03 crc 
kubenswrapper[4787]: I0126 17:57:03.961767 4787 generic.go:334] "Generic (PLEG): container finished" podID="fd6452e2-f24e-41fa-9979-70cdc9171695" containerID="a5a19ed55933b19fbdbed2ce2352fed76105c16fb3a69c713af3c8b82e162cf9" exitCode=0 Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.961853 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" event={"ID":"fd6452e2-f24e-41fa-9979-70cdc9171695","Type":"ContainerDied","Data":"a5a19ed55933b19fbdbed2ce2352fed76105c16fb3a69c713af3c8b82e162cf9"} Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.966908 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gndbr_11106d86-1e86-47cf-907d-9fb690a4f56e/console/0.log" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.967033 4787 generic.go:334] "Generic (PLEG): container finished" podID="11106d86-1e86-47cf-907d-9fb690a4f56e" containerID="f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690" exitCode=2 Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.967091 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gndbr" event={"ID":"11106d86-1e86-47cf-907d-9fb690a4f56e","Type":"ContainerDied","Data":"f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690"} Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.967141 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gndbr" event={"ID":"11106d86-1e86-47cf-907d-9fb690a4f56e","Type":"ContainerDied","Data":"dacab8e31cfe41128b6e4565c264fda79a25f1522efdfec0b619314c34ba229c"} Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.967177 4787 scope.go:117] "RemoveContainer" containerID="f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690" Jan 26 17:57:03 crc kubenswrapper[4787]: I0126 17:57:03.967321 4787 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-gndbr" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.004667 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gndbr"] Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.008276 4787 scope.go:117] "RemoveContainer" containerID="f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.013493 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-gndbr"] Jan 26 17:57:04 crc kubenswrapper[4787]: E0126 17:57:04.015562 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690\": container with ID starting with f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690 not found: ID does not exist" containerID="f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.015626 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690"} err="failed to get container status \"f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690\": rpc error: code = NotFound desc = could not find container \"f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690\": container with ID starting with f7a590301084c5e85b8e3ace4176b77d3671430e20afe0ac915fbbfa15b5f690 not found: ID does not exist" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.015813 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mm4dt"] Jan 26 17:57:04 crc kubenswrapper[4787]: E0126 17:57:04.016242 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11106d86-1e86-47cf-907d-9fb690a4f56e" 
containerName="console" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.016258 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="11106d86-1e86-47cf-907d-9fb690a4f56e" containerName="console" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.016368 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="11106d86-1e86-47cf-907d-9fb690a4f56e" containerName="console" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.020742 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.027495 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mm4dt"] Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.082037 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-utilities\") pod \"redhat-operators-mm4dt\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.082081 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsbf6\" (UniqueName: \"kubernetes.io/projected/90e98926-253f-438e-9845-3e4f3fb180c7-kube-api-access-dsbf6\") pod \"redhat-operators-mm4dt\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.082170 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-catalog-content\") pod \"redhat-operators-mm4dt\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 
17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.183682 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-catalog-content\") pod \"redhat-operators-mm4dt\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.183740 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsbf6\" (UniqueName: \"kubernetes.io/projected/90e98926-253f-438e-9845-3e4f3fb180c7-kube-api-access-dsbf6\") pod \"redhat-operators-mm4dt\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.183769 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-utilities\") pod \"redhat-operators-mm4dt\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.184236 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-catalog-content\") pod \"redhat-operators-mm4dt\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.184306 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-utilities\") pod \"redhat-operators-mm4dt\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.215561 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsbf6\" (UniqueName: \"kubernetes.io/projected/90e98926-253f-438e-9845-3e4f3fb180c7-kube-api-access-dsbf6\") pod \"redhat-operators-mm4dt\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.402160 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.597627 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mm4dt"] Jan 26 17:57:04 crc kubenswrapper[4787]: W0126 17:57:04.606651 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90e98926_253f_438e_9845_3e4f3fb180c7.slice/crio-799b0e541afd28d66437e990a73f2429bbcd30aff13da90ec3f97ab6f1cfa3d9 WatchSource:0}: Error finding container 799b0e541afd28d66437e990a73f2429bbcd30aff13da90ec3f97ab6f1cfa3d9: Status 404 returned error can't find the container with id 799b0e541afd28d66437e990a73f2429bbcd30aff13da90ec3f97ab6f1cfa3d9 Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.973616 4787 generic.go:334] "Generic (PLEG): container finished" podID="fd6452e2-f24e-41fa-9979-70cdc9171695" containerID="5e1bc2fe801006a1db6708123d6c1a588677c0128a5a5fb822cddb5cce24faca" exitCode=0 Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.973691 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" event={"ID":"fd6452e2-f24e-41fa-9979-70cdc9171695","Type":"ContainerDied","Data":"5e1bc2fe801006a1db6708123d6c1a588677c0128a5a5fb822cddb5cce24faca"} Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.977078 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="90e98926-253f-438e-9845-3e4f3fb180c7" containerID="ed73f4a61d349a4daf8489b0bece56b210be3cb2bf1acb76aadb09972c0f6596" exitCode=0 Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.977290 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4dt" event={"ID":"90e98926-253f-438e-9845-3e4f3fb180c7","Type":"ContainerDied","Data":"ed73f4a61d349a4daf8489b0bece56b210be3cb2bf1acb76aadb09972c0f6596"} Jan 26 17:57:04 crc kubenswrapper[4787]: I0126 17:57:04.977666 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4dt" event={"ID":"90e98926-253f-438e-9845-3e4f3fb180c7","Type":"ContainerStarted","Data":"799b0e541afd28d66437e990a73f2429bbcd30aff13da90ec3f97ab6f1cfa3d9"} Jan 26 17:57:05 crc kubenswrapper[4787]: I0126 17:57:05.601325 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11106d86-1e86-47cf-907d-9fb690a4f56e" path="/var/lib/kubelet/pods/11106d86-1e86-47cf-907d-9fb690a4f56e/volumes" Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.271982 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.313393 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-bundle\") pod \"fd6452e2-f24e-41fa-9979-70cdc9171695\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.313486 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-util\") pod \"fd6452e2-f24e-41fa-9979-70cdc9171695\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.313538 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dpk2\" (UniqueName: \"kubernetes.io/projected/fd6452e2-f24e-41fa-9979-70cdc9171695-kube-api-access-7dpk2\") pod \"fd6452e2-f24e-41fa-9979-70cdc9171695\" (UID: \"fd6452e2-f24e-41fa-9979-70cdc9171695\") " Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.316061 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-bundle" (OuterVolumeSpecName: "bundle") pod "fd6452e2-f24e-41fa-9979-70cdc9171695" (UID: "fd6452e2-f24e-41fa-9979-70cdc9171695"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.319504 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6452e2-f24e-41fa-9979-70cdc9171695-kube-api-access-7dpk2" (OuterVolumeSpecName: "kube-api-access-7dpk2") pod "fd6452e2-f24e-41fa-9979-70cdc9171695" (UID: "fd6452e2-f24e-41fa-9979-70cdc9171695"). InnerVolumeSpecName "kube-api-access-7dpk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.329491 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-util" (OuterVolumeSpecName: "util") pod "fd6452e2-f24e-41fa-9979-70cdc9171695" (UID: "fd6452e2-f24e-41fa-9979-70cdc9171695"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.415232 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.415290 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd6452e2-f24e-41fa-9979-70cdc9171695-util\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.415308 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dpk2\" (UniqueName: \"kubernetes.io/projected/fd6452e2-f24e-41fa-9979-70cdc9171695-kube-api-access-7dpk2\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.994464 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.994447 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg" event={"ID":"fd6452e2-f24e-41fa-9979-70cdc9171695","Type":"ContainerDied","Data":"ebf1a9b60b47431811596374a9519ed86af59abaf8cd389ee431a9ca46a736e0"} Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.994618 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebf1a9b60b47431811596374a9519ed86af59abaf8cd389ee431a9ca46a736e0" Jan 26 17:57:06 crc kubenswrapper[4787]: I0126 17:57:06.996232 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4dt" event={"ID":"90e98926-253f-438e-9845-3e4f3fb180c7","Type":"ContainerStarted","Data":"9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1"} Jan 26 17:57:08 crc kubenswrapper[4787]: I0126 17:57:08.003856 4787 generic.go:334] "Generic (PLEG): container finished" podID="90e98926-253f-438e-9845-3e4f3fb180c7" containerID="9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1" exitCode=0 Jan 26 17:57:08 crc kubenswrapper[4787]: I0126 17:57:08.003925 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4dt" event={"ID":"90e98926-253f-438e-9845-3e4f3fb180c7","Type":"ContainerDied","Data":"9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1"} Jan 26 17:57:09 crc kubenswrapper[4787]: I0126 17:57:09.012049 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4dt" event={"ID":"90e98926-253f-438e-9845-3e4f3fb180c7","Type":"ContainerStarted","Data":"fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815"} Jan 26 17:57:09 crc kubenswrapper[4787]: I0126 17:57:09.054085 4787 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mm4dt" podStartSLOduration=2.212321999 podStartE2EDuration="6.054062751s" podCreationTimestamp="2026-01-26 17:57:03 +0000 UTC" firstStartedPulling="2026-01-26 17:57:04.978771235 +0000 UTC m=+793.685907368" lastFinishedPulling="2026-01-26 17:57:08.820511987 +0000 UTC m=+797.527648120" observedRunningTime="2026-01-26 17:57:09.051117198 +0000 UTC m=+797.758253341" watchObservedRunningTime="2026-01-26 17:57:09.054062751 +0000 UTC m=+797.761198884" Jan 26 17:57:14 crc kubenswrapper[4787]: I0126 17:57:14.402875 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:14 crc kubenswrapper[4787]: I0126 17:57:14.403549 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:15 crc kubenswrapper[4787]: I0126 17:57:15.462309 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm4dt" podUID="90e98926-253f-438e-9845-3e4f3fb180c7" containerName="registry-server" probeResult="failure" output=< Jan 26 17:57:15 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 26 17:57:15 crc kubenswrapper[4787]: > Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.714471 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg"] Jan 26 17:57:16 crc kubenswrapper[4787]: E0126 17:57:16.714979 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6452e2-f24e-41fa-9979-70cdc9171695" containerName="extract" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.714995 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6452e2-f24e-41fa-9979-70cdc9171695" containerName="extract" Jan 26 17:57:16 crc kubenswrapper[4787]: E0126 17:57:16.715029 4787 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6452e2-f24e-41fa-9979-70cdc9171695" containerName="pull" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.715037 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6452e2-f24e-41fa-9979-70cdc9171695" containerName="pull" Jan 26 17:57:16 crc kubenswrapper[4787]: E0126 17:57:16.715052 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6452e2-f24e-41fa-9979-70cdc9171695" containerName="util" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.715061 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6452e2-f24e-41fa-9979-70cdc9171695" containerName="util" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.715345 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd6452e2-f24e-41fa-9979-70cdc9171695" containerName="extract" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.716130 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.719412 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-r4vhh" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.719572 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.720603 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.725294 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.735833 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg"] Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.735960 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.771317 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60908790-8a50-4773-b481-e1fadc716242-apiservice-cert\") pod \"metallb-operator-controller-manager-7cf95b7cc-gwpxg\" (UID: \"60908790-8a50-4773-b481-e1fadc716242\") " pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.771395 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgc5w\" (UniqueName: \"kubernetes.io/projected/60908790-8a50-4773-b481-e1fadc716242-kube-api-access-vgc5w\") pod \"metallb-operator-controller-manager-7cf95b7cc-gwpxg\" (UID: \"60908790-8a50-4773-b481-e1fadc716242\") " pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.771429 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60908790-8a50-4773-b481-e1fadc716242-webhook-cert\") pod \"metallb-operator-controller-manager-7cf95b7cc-gwpxg\" (UID: \"60908790-8a50-4773-b481-e1fadc716242\") " pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.807799 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:57:16 
crc kubenswrapper[4787]: I0126 17:57:16.807862 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.872065 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60908790-8a50-4773-b481-e1fadc716242-webhook-cert\") pod \"metallb-operator-controller-manager-7cf95b7cc-gwpxg\" (UID: \"60908790-8a50-4773-b481-e1fadc716242\") " pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.872131 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60908790-8a50-4773-b481-e1fadc716242-apiservice-cert\") pod \"metallb-operator-controller-manager-7cf95b7cc-gwpxg\" (UID: \"60908790-8a50-4773-b481-e1fadc716242\") " pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.872377 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgc5w\" (UniqueName: \"kubernetes.io/projected/60908790-8a50-4773-b481-e1fadc716242-kube-api-access-vgc5w\") pod \"metallb-operator-controller-manager-7cf95b7cc-gwpxg\" (UID: \"60908790-8a50-4773-b481-e1fadc716242\") " pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.879514 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60908790-8a50-4773-b481-e1fadc716242-apiservice-cert\") pod 
\"metallb-operator-controller-manager-7cf95b7cc-gwpxg\" (UID: \"60908790-8a50-4773-b481-e1fadc716242\") " pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.879825 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60908790-8a50-4773-b481-e1fadc716242-webhook-cert\") pod \"metallb-operator-controller-manager-7cf95b7cc-gwpxg\" (UID: \"60908790-8a50-4773-b481-e1fadc716242\") " pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.892822 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgc5w\" (UniqueName: \"kubernetes.io/projected/60908790-8a50-4773-b481-e1fadc716242-kube-api-access-vgc5w\") pod \"metallb-operator-controller-manager-7cf95b7cc-gwpxg\" (UID: \"60908790-8a50-4773-b481-e1fadc716242\") " pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.946384 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk"] Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.947277 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.952322 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.952710 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.953126 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-2z4p2" Jan 26 17:57:16 crc kubenswrapper[4787]: I0126 17:57:16.960819 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk"] Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.041314 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.078071 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1673c754-97a5-4dab-b604-2fed469cddb3-apiservice-cert\") pod \"metallb-operator-webhook-server-57647b8b8d-x54bk\" (UID: \"1673c754-97a5-4dab-b604-2fed469cddb3\") " pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.078140 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1673c754-97a5-4dab-b604-2fed469cddb3-webhook-cert\") pod \"metallb-operator-webhook-server-57647b8b8d-x54bk\" (UID: \"1673c754-97a5-4dab-b604-2fed469cddb3\") " pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.078179 
4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfbq\" (UniqueName: \"kubernetes.io/projected/1673c754-97a5-4dab-b604-2fed469cddb3-kube-api-access-sqfbq\") pod \"metallb-operator-webhook-server-57647b8b8d-x54bk\" (UID: \"1673c754-97a5-4dab-b604-2fed469cddb3\") " pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.179481 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1673c754-97a5-4dab-b604-2fed469cddb3-webhook-cert\") pod \"metallb-operator-webhook-server-57647b8b8d-x54bk\" (UID: \"1673c754-97a5-4dab-b604-2fed469cddb3\") " pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.179794 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqfbq\" (UniqueName: \"kubernetes.io/projected/1673c754-97a5-4dab-b604-2fed469cddb3-kube-api-access-sqfbq\") pod \"metallb-operator-webhook-server-57647b8b8d-x54bk\" (UID: \"1673c754-97a5-4dab-b604-2fed469cddb3\") " pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.179872 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1673c754-97a5-4dab-b604-2fed469cddb3-apiservice-cert\") pod \"metallb-operator-webhook-server-57647b8b8d-x54bk\" (UID: \"1673c754-97a5-4dab-b604-2fed469cddb3\") " pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.187875 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1673c754-97a5-4dab-b604-2fed469cddb3-webhook-cert\") pod 
\"metallb-operator-webhook-server-57647b8b8d-x54bk\" (UID: \"1673c754-97a5-4dab-b604-2fed469cddb3\") " pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.208601 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1673c754-97a5-4dab-b604-2fed469cddb3-apiservice-cert\") pod \"metallb-operator-webhook-server-57647b8b8d-x54bk\" (UID: \"1673c754-97a5-4dab-b604-2fed469cddb3\") " pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.226854 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqfbq\" (UniqueName: \"kubernetes.io/projected/1673c754-97a5-4dab-b604-2fed469cddb3-kube-api-access-sqfbq\") pod \"metallb-operator-webhook-server-57647b8b8d-x54bk\" (UID: \"1673c754-97a5-4dab-b604-2fed469cddb3\") " pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.263567 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.341578 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg"] Jan 26 17:57:17 crc kubenswrapper[4787]: W0126 17:57:17.346361 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60908790_8a50_4773_b481_e1fadc716242.slice/crio-2ee921b54d3e2b1b8d40f606128c9079db2b87f305097afd1b54ffa59e9b6a1e WatchSource:0}: Error finding container 2ee921b54d3e2b1b8d40f606128c9079db2b87f305097afd1b54ffa59e9b6a1e: Status 404 returned error can't find the container with id 2ee921b54d3e2b1b8d40f606128c9079db2b87f305097afd1b54ffa59e9b6a1e Jan 26 17:57:17 crc kubenswrapper[4787]: I0126 17:57:17.730879 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk"] Jan 26 17:57:18 crc kubenswrapper[4787]: I0126 17:57:18.054836 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" event={"ID":"1673c754-97a5-4dab-b604-2fed469cddb3","Type":"ContainerStarted","Data":"111b2e4131346b5696c63fba21602bc8b551fed8e1151b0ebcf392335ee8b07e"} Jan 26 17:57:18 crc kubenswrapper[4787]: I0126 17:57:18.055899 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" event={"ID":"60908790-8a50-4773-b481-e1fadc716242","Type":"ContainerStarted","Data":"2ee921b54d3e2b1b8d40f606128c9079db2b87f305097afd1b54ffa59e9b6a1e"} Jan 26 17:57:24 crc kubenswrapper[4787]: I0126 17:57:24.100582 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" 
event={"ID":"60908790-8a50-4773-b481-e1fadc716242","Type":"ContainerStarted","Data":"be4426ba80094047a00df23f19f388dc0e7604f11702574076817db217900148"} Jan 26 17:57:24 crc kubenswrapper[4787]: I0126 17:57:24.101084 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:24 crc kubenswrapper[4787]: I0126 17:57:24.101858 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" event={"ID":"1673c754-97a5-4dab-b604-2fed469cddb3","Type":"ContainerStarted","Data":"6bfffef3c1185795e716fd4f691a6ecf329749f40954ee0a990e5c14eab7d944"} Jan 26 17:57:24 crc kubenswrapper[4787]: I0126 17:57:24.102046 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:24 crc kubenswrapper[4787]: I0126 17:57:24.127613 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" podStartSLOduration=1.681165107 podStartE2EDuration="8.12758699s" podCreationTimestamp="2026-01-26 17:57:16 +0000 UTC" firstStartedPulling="2026-01-26 17:57:17.348921202 +0000 UTC m=+806.056057335" lastFinishedPulling="2026-01-26 17:57:23.795343085 +0000 UTC m=+812.502479218" observedRunningTime="2026-01-26 17:57:24.122607082 +0000 UTC m=+812.829743215" watchObservedRunningTime="2026-01-26 17:57:24.12758699 +0000 UTC m=+812.834723123" Jan 26 17:57:24 crc kubenswrapper[4787]: I0126 17:57:24.143980 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" podStartSLOduration=2.089751343 podStartE2EDuration="8.143960494s" podCreationTimestamp="2026-01-26 17:57:16 +0000 UTC" firstStartedPulling="2026-01-26 17:57:17.750151098 +0000 UTC m=+806.457287231" lastFinishedPulling="2026-01-26 
17:57:23.804360249 +0000 UTC m=+812.511496382" observedRunningTime="2026-01-26 17:57:24.142722577 +0000 UTC m=+812.849858710" watchObservedRunningTime="2026-01-26 17:57:24.143960494 +0000 UTC m=+812.851096627" Jan 26 17:57:24 crc kubenswrapper[4787]: I0126 17:57:24.473741 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:24 crc kubenswrapper[4787]: I0126 17:57:24.582693 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:24 crc kubenswrapper[4787]: I0126 17:57:24.715914 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mm4dt"] Jan 26 17:57:26 crc kubenswrapper[4787]: I0126 17:57:26.112182 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mm4dt" podUID="90e98926-253f-438e-9845-3e4f3fb180c7" containerName="registry-server" containerID="cri-o://fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815" gracePeriod=2 Jan 26 17:57:27 crc kubenswrapper[4787]: I0126 17:57:27.603915 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:27 crc kubenswrapper[4787]: I0126 17:57:27.749489 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-catalog-content\") pod \"90e98926-253f-438e-9845-3e4f3fb180c7\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " Jan 26 17:57:27 crc kubenswrapper[4787]: I0126 17:57:27.749647 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-utilities\") pod \"90e98926-253f-438e-9845-3e4f3fb180c7\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " Jan 26 17:57:27 crc kubenswrapper[4787]: I0126 17:57:27.749704 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsbf6\" (UniqueName: \"kubernetes.io/projected/90e98926-253f-438e-9845-3e4f3fb180c7-kube-api-access-dsbf6\") pod \"90e98926-253f-438e-9845-3e4f3fb180c7\" (UID: \"90e98926-253f-438e-9845-3e4f3fb180c7\") " Jan 26 17:57:27 crc kubenswrapper[4787]: I0126 17:57:27.750449 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-utilities" (OuterVolumeSpecName: "utilities") pod "90e98926-253f-438e-9845-3e4f3fb180c7" (UID: "90e98926-253f-438e-9845-3e4f3fb180c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:57:27 crc kubenswrapper[4787]: I0126 17:57:27.765306 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e98926-253f-438e-9845-3e4f3fb180c7-kube-api-access-dsbf6" (OuterVolumeSpecName: "kube-api-access-dsbf6") pod "90e98926-253f-438e-9845-3e4f3fb180c7" (UID: "90e98926-253f-438e-9845-3e4f3fb180c7"). InnerVolumeSpecName "kube-api-access-dsbf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:57:27 crc kubenswrapper[4787]: I0126 17:57:27.851197 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsbf6\" (UniqueName: \"kubernetes.io/projected/90e98926-253f-438e-9845-3e4f3fb180c7-kube-api-access-dsbf6\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:27 crc kubenswrapper[4787]: I0126 17:57:27.851242 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:27 crc kubenswrapper[4787]: I0126 17:57:27.888666 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90e98926-253f-438e-9845-3e4f3fb180c7" (UID: "90e98926-253f-438e-9845-3e4f3fb180c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:57:27 crc kubenswrapper[4787]: I0126 17:57:27.952557 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e98926-253f-438e-9845-3e4f3fb180c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.124207 4787 generic.go:334] "Generic (PLEG): container finished" podID="90e98926-253f-438e-9845-3e4f3fb180c7" containerID="fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815" exitCode=0 Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.124253 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4dt" event={"ID":"90e98926-253f-438e-9845-3e4f3fb180c7","Type":"ContainerDied","Data":"fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815"} Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.124268 4787 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm4dt" Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.124279 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4dt" event={"ID":"90e98926-253f-438e-9845-3e4f3fb180c7","Type":"ContainerDied","Data":"799b0e541afd28d66437e990a73f2429bbcd30aff13da90ec3f97ab6f1cfa3d9"} Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.124308 4787 scope.go:117] "RemoveContainer" containerID="fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815" Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.149322 4787 scope.go:117] "RemoveContainer" containerID="9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1" Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.160441 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mm4dt"] Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.165313 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mm4dt"] Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.180756 4787 scope.go:117] "RemoveContainer" containerID="ed73f4a61d349a4daf8489b0bece56b210be3cb2bf1acb76aadb09972c0f6596" Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.205434 4787 scope.go:117] "RemoveContainer" containerID="fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815" Jan 26 17:57:28 crc kubenswrapper[4787]: E0126 17:57:28.205919 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815\": container with ID starting with fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815 not found: ID does not exist" containerID="fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815" Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.205986 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815"} err="failed to get container status \"fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815\": rpc error: code = NotFound desc = could not find container \"fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815\": container with ID starting with fa2c663a74a7680933e7f1139cfa2bfc2657c5c5bded49b361be1eeb78c4e815 not found: ID does not exist" Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.206018 4787 scope.go:117] "RemoveContainer" containerID="9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1" Jan 26 17:57:28 crc kubenswrapper[4787]: E0126 17:57:28.207450 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1\": container with ID starting with 9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1 not found: ID does not exist" containerID="9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1" Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.207489 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1"} err="failed to get container status \"9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1\": rpc error: code = NotFound desc = could not find container \"9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1\": container with ID starting with 9c260932cdd63ce93572826491d2d2f9253e31166141018136ceeba70d3f07b1 not found: ID does not exist" Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.207515 4787 scope.go:117] "RemoveContainer" containerID="ed73f4a61d349a4daf8489b0bece56b210be3cb2bf1acb76aadb09972c0f6596" Jan 26 17:57:28 crc kubenswrapper[4787]: E0126 
17:57:28.210264 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed73f4a61d349a4daf8489b0bece56b210be3cb2bf1acb76aadb09972c0f6596\": container with ID starting with ed73f4a61d349a4daf8489b0bece56b210be3cb2bf1acb76aadb09972c0f6596 not found: ID does not exist" containerID="ed73f4a61d349a4daf8489b0bece56b210be3cb2bf1acb76aadb09972c0f6596" Jan 26 17:57:28 crc kubenswrapper[4787]: I0126 17:57:28.210321 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed73f4a61d349a4daf8489b0bece56b210be3cb2bf1acb76aadb09972c0f6596"} err="failed to get container status \"ed73f4a61d349a4daf8489b0bece56b210be3cb2bf1acb76aadb09972c0f6596\": rpc error: code = NotFound desc = could not find container \"ed73f4a61d349a4daf8489b0bece56b210be3cb2bf1acb76aadb09972c0f6596\": container with ID starting with ed73f4a61d349a4daf8489b0bece56b210be3cb2bf1acb76aadb09972c0f6596 not found: ID does not exist" Jan 26 17:57:29 crc kubenswrapper[4787]: I0126 17:57:29.596629 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e98926-253f-438e-9845-3e4f3fb180c7" path="/var/lib/kubelet/pods/90e98926-253f-438e-9845-3e4f3fb180c7/volumes" Jan 26 17:57:37 crc kubenswrapper[4787]: I0126 17:57:37.268345 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-57647b8b8d-x54bk" Jan 26 17:57:46 crc kubenswrapper[4787]: I0126 17:57:46.808008 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:57:46 crc kubenswrapper[4787]: I0126 17:57:46.808442 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:57:46 crc kubenswrapper[4787]: I0126 17:57:46.808486 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 17:57:46 crc kubenswrapper[4787]: I0126 17:57:46.809099 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db3aa5f8ecb00426491aefbe94533ea044737555893f85a79b932f0e6fb23390"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:57:46 crc kubenswrapper[4787]: I0126 17:57:46.809181 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://db3aa5f8ecb00426491aefbe94533ea044737555893f85a79b932f0e6fb23390" gracePeriod=600 Jan 26 17:57:47 crc kubenswrapper[4787]: I0126 17:57:47.241221 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="db3aa5f8ecb00426491aefbe94533ea044737555893f85a79b932f0e6fb23390" exitCode=0 Jan 26 17:57:47 crc kubenswrapper[4787]: I0126 17:57:47.241272 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"db3aa5f8ecb00426491aefbe94533ea044737555893f85a79b932f0e6fb23390"} Jan 26 17:57:47 crc kubenswrapper[4787]: I0126 17:57:47.241310 4787 scope.go:117] "RemoveContainer" 
containerID="f650ee61520498599c3b5c68c12d1d32bfaabde165ca884cb5a77f30320f32f1" Jan 26 17:57:48 crc kubenswrapper[4787]: I0126 17:57:48.248537 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"72456f5ee53807e4dd44f118cd885938e3fabd30b979df1c99e1042ba20d5aff"} Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.043530 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7cf95b7cc-gwpxg" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.674486 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9"] Jan 26 17:57:57 crc kubenswrapper[4787]: E0126 17:57:57.674964 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e98926-253f-438e-9845-3e4f3fb180c7" containerName="extract-content" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.674981 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e98926-253f-438e-9845-3e4f3fb180c7" containerName="extract-content" Jan 26 17:57:57 crc kubenswrapper[4787]: E0126 17:57:57.674994 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e98926-253f-438e-9845-3e4f3fb180c7" containerName="extract-utilities" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.675002 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e98926-253f-438e-9845-3e4f3fb180c7" containerName="extract-utilities" Jan 26 17:57:57 crc kubenswrapper[4787]: E0126 17:57:57.675022 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e98926-253f-438e-9845-3e4f3fb180c7" containerName="registry-server" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.675029 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e98926-253f-438e-9845-3e4f3fb180c7" containerName="registry-server" 
Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.675133 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e98926-253f-438e-9845-3e4f3fb180c7" containerName="registry-server" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.675480 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.677965 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vqx4j" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.678065 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.694508 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9"] Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.702499 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-lqwvw"] Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.704878 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.708272 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.708289 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.747187 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d161ffa-23ea-4543-a477-9481257193fc-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sdpp9\" (UID: \"0d161ffa-23ea-4543-a477-9481257193fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.747482 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlx9r\" (UniqueName: \"kubernetes.io/projected/0d161ffa-23ea-4543-a477-9481257193fc-kube-api-access-xlx9r\") pod \"frr-k8s-webhook-server-7df86c4f6c-sdpp9\" (UID: \"0d161ffa-23ea-4543-a477-9481257193fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.762600 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-f78r2"] Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.763790 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.769312 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.769343 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.769361 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-hbhf5" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.769361 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.800235 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-6jqpw"] Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.801393 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.803337 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.807668 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6jqpw"] Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854337 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-memberlist\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854378 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-metrics\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854406 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-frr-startup\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854433 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d161ffa-23ea-4543-a477-9481257193fc-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sdpp9\" (UID: \"0d161ffa-23ea-4543-a477-9481257193fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854452 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-frr-sockets\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854466 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-frr-conf\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854503 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlx9r\" (UniqueName: \"kubernetes.io/projected/0d161ffa-23ea-4543-a477-9481257193fc-kube-api-access-xlx9r\") pod \"frr-k8s-webhook-server-7df86c4f6c-sdpp9\" (UID: \"0d161ffa-23ea-4543-a477-9481257193fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854541 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhj4b\" (UniqueName: \"kubernetes.io/projected/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-kube-api-access-mhj4b\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854560 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-metrics-certs\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854579 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767bq\" (UniqueName: \"kubernetes.io/projected/c5a525de-c458-4bd2-95cb-a514b2ade84f-kube-api-access-767bq\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854595 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-reloader\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854609 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-metrics-certs\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.854638 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c5a525de-c458-4bd2-95cb-a514b2ade84f-metallb-excludel2\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: E0126 17:57:57.854655 4787 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 26 17:57:57 crc kubenswrapper[4787]: E0126 17:57:57.854724 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d161ffa-23ea-4543-a477-9481257193fc-cert podName:0d161ffa-23ea-4543-a477-9481257193fc nodeName:}" failed. 
No retries permitted until 2026-01-26 17:57:58.354704565 +0000 UTC m=+847.061840698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0d161ffa-23ea-4543-a477-9481257193fc-cert") pod "frr-k8s-webhook-server-7df86c4f6c-sdpp9" (UID: "0d161ffa-23ea-4543-a477-9481257193fc") : secret "frr-k8s-webhook-server-cert" not found Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.872130 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlx9r\" (UniqueName: \"kubernetes.io/projected/0d161ffa-23ea-4543-a477-9481257193fc-kube-api-access-xlx9r\") pod \"frr-k8s-webhook-server-7df86c4f6c-sdpp9\" (UID: \"0d161ffa-23ea-4543-a477-9481257193fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.955877 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhj4b\" (UniqueName: \"kubernetes.io/projected/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-kube-api-access-mhj4b\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.956202 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/456244df-ce3a-476e-b68a-2c0d37f24aa5-metrics-certs\") pod \"controller-6968d8fdc4-6jqpw\" (UID: \"456244df-ce3a-476e-b68a-2c0d37f24aa5\") " pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.956285 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-metrics-certs\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 
17:57:57.956368 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767bq\" (UniqueName: \"kubernetes.io/projected/c5a525de-c458-4bd2-95cb-a514b2ade84f-kube-api-access-767bq\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.956445 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-reloader\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.956520 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-metrics-certs\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.956599 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c5a525de-c458-4bd2-95cb-a514b2ade84f-metallb-excludel2\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: E0126 17:57:57.956662 4787 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 26 17:57:57 crc kubenswrapper[4787]: E0126 17:57:57.956748 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-metrics-certs podName:02143b0c-6bfa-4ba8-bc62-ba62eb8768cd nodeName:}" failed. No retries permitted until 2026-01-26 17:57:58.456725449 +0000 UTC m=+847.163861662 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-metrics-certs") pod "frr-k8s-lqwvw" (UID: "02143b0c-6bfa-4ba8-bc62-ba62eb8768cd") : secret "frr-k8s-certs-secret" not found Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.956685 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ldv\" (UniqueName: \"kubernetes.io/projected/456244df-ce3a-476e-b68a-2c0d37f24aa5-kube-api-access-54ldv\") pod \"controller-6968d8fdc4-6jqpw\" (UID: \"456244df-ce3a-476e-b68a-2c0d37f24aa5\") " pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.956844 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-memberlist\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.956883 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-metrics\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.956930 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-frr-startup\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.956985 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/456244df-ce3a-476e-b68a-2c0d37f24aa5-cert\") pod 
\"controller-6968d8fdc4-6jqpw\" (UID: \"456244df-ce3a-476e-b68a-2c0d37f24aa5\") " pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.957022 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-frr-sockets\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.957039 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-frr-conf\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: E0126 17:57:57.957168 4787 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 17:57:57 crc kubenswrapper[4787]: E0126 17:57:57.957264 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-memberlist podName:c5a525de-c458-4bd2-95cb-a514b2ade84f nodeName:}" failed. No retries permitted until 2026-01-26 17:57:58.45724924 +0000 UTC m=+847.164385373 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-memberlist") pod "speaker-f78r2" (UID: "c5a525de-c458-4bd2-95cb-a514b2ade84f") : secret "metallb-memberlist" not found Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.957275 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-metrics\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.956853 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-reloader\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.957413 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c5a525de-c458-4bd2-95cb-a514b2ade84f-metallb-excludel2\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.957581 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-frr-conf\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.957770 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-frr-sockets\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 
17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.958233 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-frr-startup\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.960123 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-metrics-certs\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.974504 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-767bq\" (UniqueName: \"kubernetes.io/projected/c5a525de-c458-4bd2-95cb-a514b2ade84f-kube-api-access-767bq\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:57 crc kubenswrapper[4787]: I0126 17:57:57.981532 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhj4b\" (UniqueName: \"kubernetes.io/projected/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-kube-api-access-mhj4b\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.058397 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/456244df-ce3a-476e-b68a-2c0d37f24aa5-cert\") pod \"controller-6968d8fdc4-6jqpw\" (UID: \"456244df-ce3a-476e-b68a-2c0d37f24aa5\") " pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.058476 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/456244df-ce3a-476e-b68a-2c0d37f24aa5-metrics-certs\") pod \"controller-6968d8fdc4-6jqpw\" (UID: \"456244df-ce3a-476e-b68a-2c0d37f24aa5\") " pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.058523 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54ldv\" (UniqueName: \"kubernetes.io/projected/456244df-ce3a-476e-b68a-2c0d37f24aa5-kube-api-access-54ldv\") pod \"controller-6968d8fdc4-6jqpw\" (UID: \"456244df-ce3a-476e-b68a-2c0d37f24aa5\") " pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.061507 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/456244df-ce3a-476e-b68a-2c0d37f24aa5-metrics-certs\") pod \"controller-6968d8fdc4-6jqpw\" (UID: \"456244df-ce3a-476e-b68a-2c0d37f24aa5\") " pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.061921 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.072828 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/456244df-ce3a-476e-b68a-2c0d37f24aa5-cert\") pod \"controller-6968d8fdc4-6jqpw\" (UID: \"456244df-ce3a-476e-b68a-2c0d37f24aa5\") " pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.075300 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54ldv\" (UniqueName: \"kubernetes.io/projected/456244df-ce3a-476e-b68a-2c0d37f24aa5-kube-api-access-54ldv\") pod \"controller-6968d8fdc4-6jqpw\" (UID: \"456244df-ce3a-476e-b68a-2c0d37f24aa5\") " pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.127140 4787 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.361789 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d161ffa-23ea-4543-a477-9481257193fc-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sdpp9\" (UID: \"0d161ffa-23ea-4543-a477-9481257193fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.366806 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d161ffa-23ea-4543-a477-9481257193fc-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sdpp9\" (UID: \"0d161ffa-23ea-4543-a477-9481257193fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.462870 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-metrics-certs\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.462939 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-memberlist\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:58 crc kubenswrapper[4787]: E0126 17:57:58.463104 4787 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 17:57:58 crc kubenswrapper[4787]: E0126 17:57:58.463206 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-memberlist 
podName:c5a525de-c458-4bd2-95cb-a514b2ade84f nodeName:}" failed. No retries permitted until 2026-01-26 17:57:59.463186197 +0000 UTC m=+848.170322330 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-memberlist") pod "speaker-f78r2" (UID: "c5a525de-c458-4bd2-95cb-a514b2ade84f") : secret "metallb-memberlist" not found Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.465980 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02143b0c-6bfa-4ba8-bc62-ba62eb8768cd-metrics-certs\") pod \"frr-k8s-lqwvw\" (UID: \"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd\") " pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.510022 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6jqpw"] Jan 26 17:57:58 crc kubenswrapper[4787]: W0126 17:57:58.514313 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod456244df_ce3a_476e_b68a_2c0d37f24aa5.slice/crio-f733fa2f22aaf971ce0be6536839b734b507a4d7de01ab0de3e64764554ea5b3 WatchSource:0}: Error finding container f733fa2f22aaf971ce0be6536839b734b507a4d7de01ab0de3e64764554ea5b3: Status 404 returned error can't find the container with id f733fa2f22aaf971ce0be6536839b734b507a4d7de01ab0de3e64764554ea5b3 Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.594076 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.622420 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:57:58 crc kubenswrapper[4787]: I0126 17:57:58.791526 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9"] Jan 26 17:57:58 crc kubenswrapper[4787]: W0126 17:57:58.798142 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d161ffa_23ea_4543_a477_9481257193fc.slice/crio-e447d22ac67370d68e7d22407b295defd7e254337e45b6d26fbcb204707f9b0b WatchSource:0}: Error finding container e447d22ac67370d68e7d22407b295defd7e254337e45b6d26fbcb204707f9b0b: Status 404 returned error can't find the container with id e447d22ac67370d68e7d22407b295defd7e254337e45b6d26fbcb204707f9b0b Jan 26 17:57:59 crc kubenswrapper[4787]: I0126 17:57:59.321769 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lqwvw" event={"ID":"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd","Type":"ContainerStarted","Data":"e6cb3fd15bc93607babfa657c2ca8373d7eca48b57353167417b1a388910a203"} Jan 26 17:57:59 crc kubenswrapper[4787]: I0126 17:57:59.323289 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" event={"ID":"0d161ffa-23ea-4543-a477-9481257193fc","Type":"ContainerStarted","Data":"e447d22ac67370d68e7d22407b295defd7e254337e45b6d26fbcb204707f9b0b"} Jan 26 17:57:59 crc kubenswrapper[4787]: I0126 17:57:59.325181 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6jqpw" event={"ID":"456244df-ce3a-476e-b68a-2c0d37f24aa5","Type":"ContainerStarted","Data":"79122668063d67c874d7abb4abe0562a1473ae398b4ec25569a962a125ff703b"} Jan 26 17:57:59 crc kubenswrapper[4787]: I0126 17:57:59.325220 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6jqpw" 
event={"ID":"456244df-ce3a-476e-b68a-2c0d37f24aa5","Type":"ContainerStarted","Data":"e980a4a2add63c364c35667fe41d9712c164c2df051a606f9d345f9351ac7c34"} Jan 26 17:57:59 crc kubenswrapper[4787]: I0126 17:57:59.325239 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6jqpw" event={"ID":"456244df-ce3a-476e-b68a-2c0d37f24aa5","Type":"ContainerStarted","Data":"f733fa2f22aaf971ce0be6536839b734b507a4d7de01ab0de3e64764554ea5b3"} Jan 26 17:57:59 crc kubenswrapper[4787]: I0126 17:57:59.325343 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:57:59 crc kubenswrapper[4787]: I0126 17:57:59.348622 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-6jqpw" podStartSLOduration=2.348602439 podStartE2EDuration="2.348602439s" podCreationTimestamp="2026-01-26 17:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:57:59.345803738 +0000 UTC m=+848.052939881" watchObservedRunningTime="2026-01-26 17:57:59.348602439 +0000 UTC m=+848.055738582" Jan 26 17:57:59 crc kubenswrapper[4787]: I0126 17:57:59.476684 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-memberlist\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:59 crc kubenswrapper[4787]: I0126 17:57:59.486922 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c5a525de-c458-4bd2-95cb-a514b2ade84f-memberlist\") pod \"speaker-f78r2\" (UID: \"c5a525de-c458-4bd2-95cb-a514b2ade84f\") " pod="metallb-system/speaker-f78r2" Jan 26 17:57:59 crc kubenswrapper[4787]: I0126 17:57:59.583465 4787 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-f78r2" Jan 26 17:57:59 crc kubenswrapper[4787]: W0126 17:57:59.614924 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a525de_c458_4bd2_95cb_a514b2ade84f.slice/crio-0c0595c4c03bc04cfb08bcc268f4424622e03214fa6d290c4159ad7029fabce8 WatchSource:0}: Error finding container 0c0595c4c03bc04cfb08bcc268f4424622e03214fa6d290c4159ad7029fabce8: Status 404 returned error can't find the container with id 0c0595c4c03bc04cfb08bcc268f4424622e03214fa6d290c4159ad7029fabce8 Jan 26 17:58:00 crc kubenswrapper[4787]: I0126 17:58:00.338595 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-f78r2" event={"ID":"c5a525de-c458-4bd2-95cb-a514b2ade84f","Type":"ContainerStarted","Data":"9dd04608b8fc790cee35aab9e0799a6896326fa01318a0b56e95d7772687a929"} Jan 26 17:58:00 crc kubenswrapper[4787]: I0126 17:58:00.338647 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-f78r2" event={"ID":"c5a525de-c458-4bd2-95cb-a514b2ade84f","Type":"ContainerStarted","Data":"e6087e73ee33f93476760776d9816a4fc53ed6c4b27fbfbe87ff6b08018096a8"} Jan 26 17:58:00 crc kubenswrapper[4787]: I0126 17:58:00.338666 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-f78r2" event={"ID":"c5a525de-c458-4bd2-95cb-a514b2ade84f","Type":"ContainerStarted","Data":"0c0595c4c03bc04cfb08bcc268f4424622e03214fa6d290c4159ad7029fabce8"} Jan 26 17:58:00 crc kubenswrapper[4787]: I0126 17:58:00.338867 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-f78r2" Jan 26 17:58:00 crc kubenswrapper[4787]: I0126 17:58:00.370593 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-f78r2" podStartSLOduration=3.3705773900000002 podStartE2EDuration="3.37057739s" 
podCreationTimestamp="2026-01-26 17:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:58:00.366370239 +0000 UTC m=+849.073506372" watchObservedRunningTime="2026-01-26 17:58:00.37057739 +0000 UTC m=+849.077713513" Jan 26 17:58:06 crc kubenswrapper[4787]: I0126 17:58:06.373656 4787 generic.go:334] "Generic (PLEG): container finished" podID="02143b0c-6bfa-4ba8-bc62-ba62eb8768cd" containerID="7d6b85fa130cbac0a769b9b7d9881b144296cb620ab8c35031c5af000ac91153" exitCode=0 Jan 26 17:58:06 crc kubenswrapper[4787]: I0126 17:58:06.373816 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lqwvw" event={"ID":"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd","Type":"ContainerDied","Data":"7d6b85fa130cbac0a769b9b7d9881b144296cb620ab8c35031c5af000ac91153"} Jan 26 17:58:06 crc kubenswrapper[4787]: I0126 17:58:06.377208 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" event={"ID":"0d161ffa-23ea-4543-a477-9481257193fc","Type":"ContainerStarted","Data":"76deefef3d327cd3b86d0ba7be2b3a222532a55e6f127f7762ac6f93bf0b4304"} Jan 26 17:58:06 crc kubenswrapper[4787]: I0126 17:58:06.377506 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" Jan 26 17:58:06 crc kubenswrapper[4787]: I0126 17:58:06.427209 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" podStartSLOduration=2.297543547 podStartE2EDuration="9.427190573s" podCreationTimestamp="2026-01-26 17:57:57 +0000 UTC" firstStartedPulling="2026-01-26 17:57:58.800737907 +0000 UTC m=+847.507874040" lastFinishedPulling="2026-01-26 17:58:05.930384933 +0000 UTC m=+854.637521066" observedRunningTime="2026-01-26 17:58:06.425001575 +0000 UTC m=+855.132137708" watchObservedRunningTime="2026-01-26 
17:58:06.427190573 +0000 UTC m=+855.134326706" Jan 26 17:58:07 crc kubenswrapper[4787]: I0126 17:58:07.386486 4787 generic.go:334] "Generic (PLEG): container finished" podID="02143b0c-6bfa-4ba8-bc62-ba62eb8768cd" containerID="3e41fd7f370ea3921b81e8a4b90dde8bba81b1ff72cbdb0bf65c10c535b626be" exitCode=0 Jan 26 17:58:07 crc kubenswrapper[4787]: I0126 17:58:07.386524 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lqwvw" event={"ID":"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd","Type":"ContainerDied","Data":"3e41fd7f370ea3921b81e8a4b90dde8bba81b1ff72cbdb0bf65c10c535b626be"} Jan 26 17:58:08 crc kubenswrapper[4787]: I0126 17:58:08.131409 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-6jqpw" Jan 26 17:58:08 crc kubenswrapper[4787]: I0126 17:58:08.394148 4787 generic.go:334] "Generic (PLEG): container finished" podID="02143b0c-6bfa-4ba8-bc62-ba62eb8768cd" containerID="c306230a23b4dbe13f20ca0e3f9a278cbe67e476daf1854a60cacf5e82fe768e" exitCode=0 Jan 26 17:58:08 crc kubenswrapper[4787]: I0126 17:58:08.394193 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lqwvw" event={"ID":"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd","Type":"ContainerDied","Data":"c306230a23b4dbe13f20ca0e3f9a278cbe67e476daf1854a60cacf5e82fe768e"} Jan 26 17:58:09 crc kubenswrapper[4787]: I0126 17:58:09.406025 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lqwvw" event={"ID":"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd","Type":"ContainerStarted","Data":"e02061ee8d42f3b5c49880a480839f8ebf88bdeb6af4db42fbaee0f2cb7d0490"} Jan 26 17:58:09 crc kubenswrapper[4787]: I0126 17:58:09.406309 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lqwvw" event={"ID":"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd","Type":"ContainerStarted","Data":"36f9e5adad35b0c70c6b2df6e0eb6365d2dd3bcf33d7a6b7b7f9212ca76cda58"} Jan 26 17:58:09 crc kubenswrapper[4787]: 
I0126 17:58:09.406322 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lqwvw" event={"ID":"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd","Type":"ContainerStarted","Data":"6c81be3a64924e49b9cdacc03d31b5c4ef2da4c3eaf92dc1aeaf83cadb4567dd"} Jan 26 17:58:09 crc kubenswrapper[4787]: I0126 17:58:09.406332 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lqwvw" event={"ID":"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd","Type":"ContainerStarted","Data":"385dfa0137345bb2d968cc6d6939dde791c8ee38f63dda89a0db090050c9c30e"} Jan 26 17:58:09 crc kubenswrapper[4787]: I0126 17:58:09.406343 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lqwvw" event={"ID":"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd","Type":"ContainerStarted","Data":"2f6f8403bd477e12953a4e6569c177edb474bdc63d7db06aedb0c95383ccbd63"} Jan 26 17:58:09 crc kubenswrapper[4787]: I0126 17:58:09.587087 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-f78r2" Jan 26 17:58:10 crc kubenswrapper[4787]: I0126 17:58:10.417043 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lqwvw" event={"ID":"02143b0c-6bfa-4ba8-bc62-ba62eb8768cd","Type":"ContainerStarted","Data":"7c2cc1a1d08a19d6598ded1195149abf1db3f0f056b73c1aadaa0b2add4ac78c"} Jan 26 17:58:10 crc kubenswrapper[4787]: I0126 17:58:10.417247 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:58:10 crc kubenswrapper[4787]: I0126 17:58:10.438536 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-lqwvw" podStartSLOduration=6.219617039 podStartE2EDuration="13.438516834s" podCreationTimestamp="2026-01-26 17:57:57 +0000 UTC" firstStartedPulling="2026-01-26 17:57:58.7342494 +0000 UTC m=+847.441385523" lastFinishedPulling="2026-01-26 17:58:05.953149185 +0000 UTC m=+854.660285318" observedRunningTime="2026-01-26 
17:58:10.437573354 +0000 UTC m=+859.144709487" watchObservedRunningTime="2026-01-26 17:58:10.438516834 +0000 UTC m=+859.145652957" Jan 26 17:58:10 crc kubenswrapper[4787]: I0126 17:58:10.927992 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z"] Jan 26 17:58:10 crc kubenswrapper[4787]: I0126 17:58:10.929866 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:10 crc kubenswrapper[4787]: I0126 17:58:10.932231 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 17:58:10 crc kubenswrapper[4787]: I0126 17:58:10.944623 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z"] Jan 26 17:58:11 crc kubenswrapper[4787]: I0126 17:58:11.005016 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7rp2\" (UniqueName: \"kubernetes.io/projected/b6c4cb41-668b-482d-a752-13aa73c5ab8f-kube-api-access-v7rp2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:11 crc kubenswrapper[4787]: I0126 17:58:11.005091 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:11 crc kubenswrapper[4787]: I0126 17:58:11.005219 
4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:11 crc kubenswrapper[4787]: I0126 17:58:11.106569 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:11 crc kubenswrapper[4787]: I0126 17:58:11.106711 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:11 crc kubenswrapper[4787]: I0126 17:58:11.106796 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7rp2\" (UniqueName: \"kubernetes.io/projected/b6c4cb41-668b-482d-a752-13aa73c5ab8f-kube-api-access-v7rp2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:11 crc kubenswrapper[4787]: I0126 17:58:11.107150 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:11 crc kubenswrapper[4787]: I0126 17:58:11.107235 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:11 crc kubenswrapper[4787]: I0126 17:58:11.130997 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7rp2\" (UniqueName: \"kubernetes.io/projected/b6c4cb41-668b-482d-a752-13aa73c5ab8f-kube-api-access-v7rp2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:11 crc kubenswrapper[4787]: I0126 17:58:11.244373 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:11 crc kubenswrapper[4787]: I0126 17:58:11.796865 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z"] Jan 26 17:58:11 crc kubenswrapper[4787]: W0126 17:58:11.801482 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6c4cb41_668b_482d_a752_13aa73c5ab8f.slice/crio-6016f88e686caa39cc34f8f20c9ff9847a269532509c1afa21d956dccfbaa68b WatchSource:0}: Error finding container 6016f88e686caa39cc34f8f20c9ff9847a269532509c1afa21d956dccfbaa68b: Status 404 returned error can't find the container with id 6016f88e686caa39cc34f8f20c9ff9847a269532509c1afa21d956dccfbaa68b Jan 26 17:58:12 crc kubenswrapper[4787]: I0126 17:58:12.463812 4787 generic.go:334] "Generic (PLEG): container finished" podID="b6c4cb41-668b-482d-a752-13aa73c5ab8f" containerID="1564804c95a26f43325b783951b04e5e5698ea258140183b76f9c5d255dac6ec" exitCode=0 Jan 26 17:58:12 crc kubenswrapper[4787]: I0126 17:58:12.463870 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" event={"ID":"b6c4cb41-668b-482d-a752-13aa73c5ab8f","Type":"ContainerDied","Data":"1564804c95a26f43325b783951b04e5e5698ea258140183b76f9c5d255dac6ec"} Jan 26 17:58:12 crc kubenswrapper[4787]: I0126 17:58:12.463903 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" event={"ID":"b6c4cb41-668b-482d-a752-13aa73c5ab8f","Type":"ContainerStarted","Data":"6016f88e686caa39cc34f8f20c9ff9847a269532509c1afa21d956dccfbaa68b"} Jan 26 17:58:13 crc kubenswrapper[4787]: I0126 17:58:13.623655 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:58:13 crc kubenswrapper[4787]: I0126 17:58:13.662429 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:58:17 crc kubenswrapper[4787]: I0126 17:58:17.499004 4787 generic.go:334] "Generic (PLEG): container finished" podID="b6c4cb41-668b-482d-a752-13aa73c5ab8f" containerID="47502fc68d16b31383763f179e0d4d14e6cb34698e6904349c1b5a97fe20776c" exitCode=0 Jan 26 17:58:17 crc kubenswrapper[4787]: I0126 17:58:17.499209 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" event={"ID":"b6c4cb41-668b-482d-a752-13aa73c5ab8f","Type":"ContainerDied","Data":"47502fc68d16b31383763f179e0d4d14e6cb34698e6904349c1b5a97fe20776c"} Jan 26 17:58:18 crc kubenswrapper[4787]: I0126 17:58:18.598230 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sdpp9" Jan 26 17:58:18 crc kubenswrapper[4787]: I0126 17:58:18.624875 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lqwvw" Jan 26 17:58:19 crc kubenswrapper[4787]: I0126 17:58:19.512837 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" event={"ID":"b6c4cb41-668b-482d-a752-13aa73c5ab8f","Type":"ContainerStarted","Data":"9c761297aa502d6362024d3fd09ceca484084fbd9e15a16cc89b382694871b94"} Jan 26 17:58:19 crc kubenswrapper[4787]: I0126 17:58:19.531611 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" podStartSLOduration=5.387166488 podStartE2EDuration="9.531595774s" podCreationTimestamp="2026-01-26 17:58:10 +0000 UTC" firstStartedPulling="2026-01-26 17:58:12.465407108 +0000 UTC m=+861.172543261" 
lastFinishedPulling="2026-01-26 17:58:16.609836404 +0000 UTC m=+865.316972547" observedRunningTime="2026-01-26 17:58:19.529446318 +0000 UTC m=+868.236582441" watchObservedRunningTime="2026-01-26 17:58:19.531595774 +0000 UTC m=+868.238731907" Jan 26 17:58:20 crc kubenswrapper[4787]: I0126 17:58:20.520192 4787 generic.go:334] "Generic (PLEG): container finished" podID="b6c4cb41-668b-482d-a752-13aa73c5ab8f" containerID="9c761297aa502d6362024d3fd09ceca484084fbd9e15a16cc89b382694871b94" exitCode=0 Jan 26 17:58:20 crc kubenswrapper[4787]: I0126 17:58:20.520240 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" event={"ID":"b6c4cb41-668b-482d-a752-13aa73c5ab8f","Type":"ContainerDied","Data":"9c761297aa502d6362024d3fd09ceca484084fbd9e15a16cc89b382694871b94"} Jan 26 17:58:21 crc kubenswrapper[4787]: I0126 17:58:21.846059 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:21 crc kubenswrapper[4787]: I0126 17:58:21.942995 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-bundle\") pod \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " Jan 26 17:58:21 crc kubenswrapper[4787]: I0126 17:58:21.943042 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-util\") pod \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " Jan 26 17:58:21 crc kubenswrapper[4787]: I0126 17:58:21.943124 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7rp2\" (UniqueName: 
\"kubernetes.io/projected/b6c4cb41-668b-482d-a752-13aa73c5ab8f-kube-api-access-v7rp2\") pod \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\" (UID: \"b6c4cb41-668b-482d-a752-13aa73c5ab8f\") " Jan 26 17:58:21 crc kubenswrapper[4787]: I0126 17:58:21.944542 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-bundle" (OuterVolumeSpecName: "bundle") pod "b6c4cb41-668b-482d-a752-13aa73c5ab8f" (UID: "b6c4cb41-668b-482d-a752-13aa73c5ab8f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:58:21 crc kubenswrapper[4787]: I0126 17:58:21.949163 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c4cb41-668b-482d-a752-13aa73c5ab8f-kube-api-access-v7rp2" (OuterVolumeSpecName: "kube-api-access-v7rp2") pod "b6c4cb41-668b-482d-a752-13aa73c5ab8f" (UID: "b6c4cb41-668b-482d-a752-13aa73c5ab8f"). InnerVolumeSpecName "kube-api-access-v7rp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:58:21 crc kubenswrapper[4787]: I0126 17:58:21.954134 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-util" (OuterVolumeSpecName: "util") pod "b6c4cb41-668b-482d-a752-13aa73c5ab8f" (UID: "b6c4cb41-668b-482d-a752-13aa73c5ab8f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:58:22 crc kubenswrapper[4787]: I0126 17:58:22.044245 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:58:22 crc kubenswrapper[4787]: I0126 17:58:22.044279 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6c4cb41-668b-482d-a752-13aa73c5ab8f-util\") on node \"crc\" DevicePath \"\"" Jan 26 17:58:22 crc kubenswrapper[4787]: I0126 17:58:22.044291 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7rp2\" (UniqueName: \"kubernetes.io/projected/b6c4cb41-668b-482d-a752-13aa73c5ab8f-kube-api-access-v7rp2\") on node \"crc\" DevicePath \"\"" Jan 26 17:58:22 crc kubenswrapper[4787]: I0126 17:58:22.533989 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" event={"ID":"b6c4cb41-668b-482d-a752-13aa73c5ab8f","Type":"ContainerDied","Data":"6016f88e686caa39cc34f8f20c9ff9847a269532509c1afa21d956dccfbaa68b"} Jan 26 17:58:22 crc kubenswrapper[4787]: I0126 17:58:22.534024 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6016f88e686caa39cc34f8f20c9ff9847a269532509c1afa21d956dccfbaa68b" Jan 26 17:58:22 crc kubenswrapper[4787]: I0126 17:58:22.534080 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.226089 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk"] Jan 26 17:58:29 crc kubenswrapper[4787]: E0126 17:58:29.227018 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c4cb41-668b-482d-a752-13aa73c5ab8f" containerName="extract" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.227038 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c4cb41-668b-482d-a752-13aa73c5ab8f" containerName="extract" Jan 26 17:58:29 crc kubenswrapper[4787]: E0126 17:58:29.227051 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c4cb41-668b-482d-a752-13aa73c5ab8f" containerName="util" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.227058 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c4cb41-668b-482d-a752-13aa73c5ab8f" containerName="util" Jan 26 17:58:29 crc kubenswrapper[4787]: E0126 17:58:29.227075 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c4cb41-668b-482d-a752-13aa73c5ab8f" containerName="pull" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.227084 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c4cb41-668b-482d-a752-13aa73c5ab8f" containerName="pull" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.227212 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c4cb41-668b-482d-a752-13aa73c5ab8f" containerName="extract" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.227712 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.230074 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.232094 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.236275 4787 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8kth9" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.250101 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk"] Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.338935 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr9pv\" (UniqueName: \"kubernetes.io/projected/4c084bc5-7201-4fe9-ae74-6877e0598d52-kube-api-access-sr9pv\") pod \"cert-manager-operator-controller-manager-64cf6dff88-jsrxk\" (UID: \"4c084bc5-7201-4fe9-ae74-6877e0598d52\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.339089 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c084bc5-7201-4fe9-ae74-6877e0598d52-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-jsrxk\" (UID: \"4c084bc5-7201-4fe9-ae74-6877e0598d52\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.439940 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sr9pv\" (UniqueName: \"kubernetes.io/projected/4c084bc5-7201-4fe9-ae74-6877e0598d52-kube-api-access-sr9pv\") pod \"cert-manager-operator-controller-manager-64cf6dff88-jsrxk\" (UID: \"4c084bc5-7201-4fe9-ae74-6877e0598d52\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.440043 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c084bc5-7201-4fe9-ae74-6877e0598d52-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-jsrxk\" (UID: \"4c084bc5-7201-4fe9-ae74-6877e0598d52\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.440561 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c084bc5-7201-4fe9-ae74-6877e0598d52-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-jsrxk\" (UID: \"4c084bc5-7201-4fe9-ae74-6877e0598d52\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.462406 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr9pv\" (UniqueName: \"kubernetes.io/projected/4c084bc5-7201-4fe9-ae74-6877e0598d52-kube-api-access-sr9pv\") pod \"cert-manager-operator-controller-manager-64cf6dff88-jsrxk\" (UID: \"4c084bc5-7201-4fe9-ae74-6877e0598d52\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.546121 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk" Jan 26 17:58:29 crc kubenswrapper[4787]: I0126 17:58:29.825393 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk"] Jan 26 17:58:29 crc kubenswrapper[4787]: W0126 17:58:29.827445 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c084bc5_7201_4fe9_ae74_6877e0598d52.slice/crio-ba7e45e6647f9912b12986ddf3bc7f383918b75f4a3c84e14b781a2a79e4f25c WatchSource:0}: Error finding container ba7e45e6647f9912b12986ddf3bc7f383918b75f4a3c84e14b781a2a79e4f25c: Status 404 returned error can't find the container with id ba7e45e6647f9912b12986ddf3bc7f383918b75f4a3c84e14b781a2a79e4f25c Jan 26 17:58:30 crc kubenswrapper[4787]: I0126 17:58:30.610559 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk" event={"ID":"4c084bc5-7201-4fe9-ae74-6877e0598d52","Type":"ContainerStarted","Data":"ba7e45e6647f9912b12986ddf3bc7f383918b75f4a3c84e14b781a2a79e4f25c"} Jan 26 17:58:39 crc kubenswrapper[4787]: I0126 17:58:39.700045 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk" event={"ID":"4c084bc5-7201-4fe9-ae74-6877e0598d52","Type":"ContainerStarted","Data":"e87006b4a26ad53a826e0aae3941b42b60fa493cbe28652d98c4ac676abb8bf2"} Jan 26 17:58:39 crc kubenswrapper[4787]: I0126 17:58:39.731796 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-jsrxk" podStartSLOduration=1.07155653 podStartE2EDuration="10.731780571s" podCreationTimestamp="2026-01-26 17:58:29 +0000 UTC" firstStartedPulling="2026-01-26 17:58:29.830545099 +0000 UTC m=+878.537681232" 
lastFinishedPulling="2026-01-26 17:58:39.49076915 +0000 UTC m=+888.197905273" observedRunningTime="2026-01-26 17:58:39.727052641 +0000 UTC m=+888.434188774" watchObservedRunningTime="2026-01-26 17:58:39.731780571 +0000 UTC m=+888.438916724" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.188456 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-l8878"] Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.189970 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-l8878" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.191992 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.192254 4787 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7jgpv" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.192286 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.204588 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-l8878"] Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.363078 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxqng\" (UniqueName: \"kubernetes.io/projected/9514fbda-b64d-456b-afdc-400a97c84beb-kube-api-access-jxqng\") pod \"cert-manager-cainjector-855d9ccff4-l8878\" (UID: \"9514fbda-b64d-456b-afdc-400a97c84beb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-l8878" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.363130 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9514fbda-b64d-456b-afdc-400a97c84beb-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-l8878\" (UID: \"9514fbda-b64d-456b-afdc-400a97c84beb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-l8878" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.464392 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxqng\" (UniqueName: \"kubernetes.io/projected/9514fbda-b64d-456b-afdc-400a97c84beb-kube-api-access-jxqng\") pod \"cert-manager-cainjector-855d9ccff4-l8878\" (UID: \"9514fbda-b64d-456b-afdc-400a97c84beb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-l8878" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.464462 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9514fbda-b64d-456b-afdc-400a97c84beb-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-l8878\" (UID: \"9514fbda-b64d-456b-afdc-400a97c84beb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-l8878" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.483775 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxqng\" (UniqueName: \"kubernetes.io/projected/9514fbda-b64d-456b-afdc-400a97c84beb-kube-api-access-jxqng\") pod \"cert-manager-cainjector-855d9ccff4-l8878\" (UID: \"9514fbda-b64d-456b-afdc-400a97c84beb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-l8878" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.483869 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9514fbda-b64d-456b-afdc-400a97c84beb-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-l8878\" (UID: \"9514fbda-b64d-456b-afdc-400a97c84beb\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-l8878" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.518140 4787 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-l8878" Jan 26 17:58:47 crc kubenswrapper[4787]: I0126 17:58:47.924193 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-l8878"] Jan 26 17:58:48 crc kubenswrapper[4787]: I0126 17:58:48.753271 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-l8878" event={"ID":"9514fbda-b64d-456b-afdc-400a97c84beb","Type":"ContainerStarted","Data":"a3ed221850324a76e17edbb96c200d4194c7e49b24f27d29f76f10087dc37e0a"} Jan 26 17:58:49 crc kubenswrapper[4787]: I0126 17:58:49.816162 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zds8t"] Jan 26 17:58:49 crc kubenswrapper[4787]: I0126 17:58:49.816828 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" Jan 26 17:58:49 crc kubenswrapper[4787]: I0126 17:58:49.819159 4787 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-fd8c5" Jan 26 17:58:49 crc kubenswrapper[4787]: I0126 17:58:49.828658 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zds8t"] Jan 26 17:58:49 crc kubenswrapper[4787]: I0126 17:58:49.998167 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dv77\" (UniqueName: \"kubernetes.io/projected/377a7b82-6a6f-4377-b05e-70fddbedaa1e-kube-api-access-5dv77\") pod \"cert-manager-webhook-f4fb5df64-zds8t\" (UID: \"377a7b82-6a6f-4377-b05e-70fddbedaa1e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" Jan 26 17:58:49 crc kubenswrapper[4787]: I0126 17:58:49.998221 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/377a7b82-6a6f-4377-b05e-70fddbedaa1e-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zds8t\" (UID: \"377a7b82-6a6f-4377-b05e-70fddbedaa1e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" Jan 26 17:58:50 crc kubenswrapper[4787]: I0126 17:58:50.099782 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/377a7b82-6a6f-4377-b05e-70fddbedaa1e-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zds8t\" (UID: \"377a7b82-6a6f-4377-b05e-70fddbedaa1e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" Jan 26 17:58:50 crc kubenswrapper[4787]: I0126 17:58:50.100358 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dv77\" (UniqueName: \"kubernetes.io/projected/377a7b82-6a6f-4377-b05e-70fddbedaa1e-kube-api-access-5dv77\") pod \"cert-manager-webhook-f4fb5df64-zds8t\" (UID: \"377a7b82-6a6f-4377-b05e-70fddbedaa1e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" Jan 26 17:58:50 crc kubenswrapper[4787]: I0126 17:58:50.123214 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dv77\" (UniqueName: \"kubernetes.io/projected/377a7b82-6a6f-4377-b05e-70fddbedaa1e-kube-api-access-5dv77\") pod \"cert-manager-webhook-f4fb5df64-zds8t\" (UID: \"377a7b82-6a6f-4377-b05e-70fddbedaa1e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" Jan 26 17:58:50 crc kubenswrapper[4787]: I0126 17:58:50.134003 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/377a7b82-6a6f-4377-b05e-70fddbedaa1e-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zds8t\" (UID: \"377a7b82-6a6f-4377-b05e-70fddbedaa1e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" Jan 26 17:58:50 crc kubenswrapper[4787]: I0126 17:58:50.143249 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" Jan 26 17:58:50 crc kubenswrapper[4787]: I0126 17:58:50.619333 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zds8t"] Jan 26 17:58:50 crc kubenswrapper[4787]: I0126 17:58:50.767034 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" event={"ID":"377a7b82-6a6f-4377-b05e-70fddbedaa1e","Type":"ContainerStarted","Data":"66ae4b2900df1a0a573f45856223ad9d5cf5e45dda385c65a65499c4f031ab09"} Jan 26 17:58:54 crc kubenswrapper[4787]: I0126 17:58:54.696556 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-mlsxn"] Jan 26 17:58:54 crc kubenswrapper[4787]: I0126 17:58:54.698084 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-mlsxn" Jan 26 17:58:54 crc kubenswrapper[4787]: I0126 17:58:54.701504 4787 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-h9rqv" Jan 26 17:58:54 crc kubenswrapper[4787]: I0126 17:58:54.721856 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-mlsxn"] Jan 26 17:58:54 crc kubenswrapper[4787]: I0126 17:58:54.764316 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmk5h\" (UniqueName: \"kubernetes.io/projected/e1c7699f-430a-4049-ad92-19240c15d2ba-kube-api-access-zmk5h\") pod \"cert-manager-86cb77c54b-mlsxn\" (UID: \"e1c7699f-430a-4049-ad92-19240c15d2ba\") " pod="cert-manager/cert-manager-86cb77c54b-mlsxn" Jan 26 17:58:54 crc kubenswrapper[4787]: I0126 17:58:54.764366 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1c7699f-430a-4049-ad92-19240c15d2ba-bound-sa-token\") pod 
\"cert-manager-86cb77c54b-mlsxn\" (UID: \"e1c7699f-430a-4049-ad92-19240c15d2ba\") " pod="cert-manager/cert-manager-86cb77c54b-mlsxn" Jan 26 17:58:54 crc kubenswrapper[4787]: I0126 17:58:54.865537 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmk5h\" (UniqueName: \"kubernetes.io/projected/e1c7699f-430a-4049-ad92-19240c15d2ba-kube-api-access-zmk5h\") pod \"cert-manager-86cb77c54b-mlsxn\" (UID: \"e1c7699f-430a-4049-ad92-19240c15d2ba\") " pod="cert-manager/cert-manager-86cb77c54b-mlsxn" Jan 26 17:58:54 crc kubenswrapper[4787]: I0126 17:58:54.865586 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1c7699f-430a-4049-ad92-19240c15d2ba-bound-sa-token\") pod \"cert-manager-86cb77c54b-mlsxn\" (UID: \"e1c7699f-430a-4049-ad92-19240c15d2ba\") " pod="cert-manager/cert-manager-86cb77c54b-mlsxn" Jan 26 17:58:54 crc kubenswrapper[4787]: I0126 17:58:54.883215 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1c7699f-430a-4049-ad92-19240c15d2ba-bound-sa-token\") pod \"cert-manager-86cb77c54b-mlsxn\" (UID: \"e1c7699f-430a-4049-ad92-19240c15d2ba\") " pod="cert-manager/cert-manager-86cb77c54b-mlsxn" Jan 26 17:58:54 crc kubenswrapper[4787]: I0126 17:58:54.883632 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmk5h\" (UniqueName: \"kubernetes.io/projected/e1c7699f-430a-4049-ad92-19240c15d2ba-kube-api-access-zmk5h\") pod \"cert-manager-86cb77c54b-mlsxn\" (UID: \"e1c7699f-430a-4049-ad92-19240c15d2ba\") " pod="cert-manager/cert-manager-86cb77c54b-mlsxn" Jan 26 17:58:55 crc kubenswrapper[4787]: I0126 17:58:55.020834 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-mlsxn" Jan 26 17:58:55 crc kubenswrapper[4787]: I0126 17:58:55.398125 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-mlsxn"] Jan 26 17:58:55 crc kubenswrapper[4787]: I0126 17:58:55.798922 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-mlsxn" event={"ID":"e1c7699f-430a-4049-ad92-19240c15d2ba","Type":"ContainerStarted","Data":"dddbc593fe0d1ac9931f612219bed6facae99f617db388f03ff0da59e4bc2614"} Jan 26 17:58:55 crc kubenswrapper[4787]: I0126 17:58:55.799311 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-mlsxn" event={"ID":"e1c7699f-430a-4049-ad92-19240c15d2ba","Type":"ContainerStarted","Data":"d3275fe7f8c82dc38393c929f6b2654492e08bdcc3dc32c6ddabb4c34297304a"} Jan 26 17:58:55 crc kubenswrapper[4787]: I0126 17:58:55.800372 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-l8878" event={"ID":"9514fbda-b64d-456b-afdc-400a97c84beb","Type":"ContainerStarted","Data":"bfaf9215b354d3612665a25b9f1bd521c3241196b38283578e182e6668a18eea"} Jan 26 17:58:55 crc kubenswrapper[4787]: I0126 17:58:55.801458 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" event={"ID":"377a7b82-6a6f-4377-b05e-70fddbedaa1e","Type":"ContainerStarted","Data":"eb85e08c1727659ad66bf2e5149df465df03bdd0541f176160a097c9a4ae52b8"} Jan 26 17:58:55 crc kubenswrapper[4787]: I0126 17:58:55.801631 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" Jan 26 17:58:55 crc kubenswrapper[4787]: I0126 17:58:55.815716 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-mlsxn" podStartSLOduration=1.815702864 podStartE2EDuration="1.815702864s" 
podCreationTimestamp="2026-01-26 17:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:58:55.814244357 +0000 UTC m=+904.521380490" watchObservedRunningTime="2026-01-26 17:58:55.815702864 +0000 UTC m=+904.522838997" Jan 26 17:58:55 crc kubenswrapper[4787]: I0126 17:58:55.842417 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-l8878" podStartSLOduration=1.5880651270000001 podStartE2EDuration="8.842402014s" podCreationTimestamp="2026-01-26 17:58:47 +0000 UTC" firstStartedPulling="2026-01-26 17:58:47.929537917 +0000 UTC m=+896.636674050" lastFinishedPulling="2026-01-26 17:58:55.183874804 +0000 UTC m=+903.891010937" observedRunningTime="2026-01-26 17:58:55.838974906 +0000 UTC m=+904.546111039" watchObservedRunningTime="2026-01-26 17:58:55.842402014 +0000 UTC m=+904.549538147" Jan 26 17:58:55 crc kubenswrapper[4787]: I0126 17:58:55.856085 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" podStartSLOduration=2.32259873 podStartE2EDuration="6.856064252s" podCreationTimestamp="2026-01-26 17:58:49 +0000 UTC" firstStartedPulling="2026-01-26 17:58:50.647837606 +0000 UTC m=+899.354973729" lastFinishedPulling="2026-01-26 17:58:55.181303118 +0000 UTC m=+903.888439251" observedRunningTime="2026-01-26 17:58:55.85320375 +0000 UTC m=+904.560339883" watchObservedRunningTime="2026-01-26 17:58:55.856064252 +0000 UTC m=+904.563200395" Jan 26 17:59:00 crc kubenswrapper[4787]: I0126 17:59:00.145968 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-zds8t" Jan 26 17:59:03 crc kubenswrapper[4787]: I0126 17:59:03.238427 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-j28dc"] Jan 26 17:59:03 crc 
kubenswrapper[4787]: I0126 17:59:03.239866 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j28dc" Jan 26 17:59:03 crc kubenswrapper[4787]: I0126 17:59:03.241652 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 17:59:03 crc kubenswrapper[4787]: I0126 17:59:03.244920 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-qpkhm" Jan 26 17:59:03 crc kubenswrapper[4787]: I0126 17:59:03.244923 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 17:59:03 crc kubenswrapper[4787]: I0126 17:59:03.249383 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j28dc"] Jan 26 17:59:03 crc kubenswrapper[4787]: I0126 17:59:03.303246 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7l6j\" (UniqueName: \"kubernetes.io/projected/484e6f74-ec79-469c-a399-9a25b1acbf7b-kube-api-access-x7l6j\") pod \"openstack-operator-index-j28dc\" (UID: \"484e6f74-ec79-469c-a399-9a25b1acbf7b\") " pod="openstack-operators/openstack-operator-index-j28dc" Jan 26 17:59:03 crc kubenswrapper[4787]: I0126 17:59:03.404985 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7l6j\" (UniqueName: \"kubernetes.io/projected/484e6f74-ec79-469c-a399-9a25b1acbf7b-kube-api-access-x7l6j\") pod \"openstack-operator-index-j28dc\" (UID: \"484e6f74-ec79-469c-a399-9a25b1acbf7b\") " pod="openstack-operators/openstack-operator-index-j28dc" Jan 26 17:59:03 crc kubenswrapper[4787]: I0126 17:59:03.423995 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7l6j\" (UniqueName: 
\"kubernetes.io/projected/484e6f74-ec79-469c-a399-9a25b1acbf7b-kube-api-access-x7l6j\") pod \"openstack-operator-index-j28dc\" (UID: \"484e6f74-ec79-469c-a399-9a25b1acbf7b\") " pod="openstack-operators/openstack-operator-index-j28dc" Jan 26 17:59:03 crc kubenswrapper[4787]: I0126 17:59:03.557000 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j28dc" Jan 26 17:59:03 crc kubenswrapper[4787]: I0126 17:59:03.808772 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j28dc"] Jan 26 17:59:03 crc kubenswrapper[4787]: I0126 17:59:03.861158 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j28dc" event={"ID":"484e6f74-ec79-469c-a399-9a25b1acbf7b","Type":"ContainerStarted","Data":"966aae227fe37e4fa60590044473141543a64099de9393c65768e1f47ac18333"} Jan 26 17:59:06 crc kubenswrapper[4787]: I0126 17:59:06.613928 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-j28dc"] Jan 26 17:59:06 crc kubenswrapper[4787]: I0126 17:59:06.883483 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j28dc" event={"ID":"484e6f74-ec79-469c-a399-9a25b1acbf7b","Type":"ContainerStarted","Data":"43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700"} Jan 26 17:59:06 crc kubenswrapper[4787]: I0126 17:59:06.883586 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-j28dc" podUID="484e6f74-ec79-469c-a399-9a25b1acbf7b" containerName="registry-server" containerID="cri-o://43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700" gracePeriod=2 Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.216027 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-index-j28dc" podStartSLOduration=1.7517372660000001 podStartE2EDuration="4.216005761s" podCreationTimestamp="2026-01-26 17:59:03 +0000 UTC" firstStartedPulling="2026-01-26 17:59:03.822430111 +0000 UTC m=+912.529566244" lastFinishedPulling="2026-01-26 17:59:06.286698606 +0000 UTC m=+914.993834739" observedRunningTime="2026-01-26 17:59:06.905316181 +0000 UTC m=+915.612452334" watchObservedRunningTime="2026-01-26 17:59:07.216005761 +0000 UTC m=+915.923141914" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.221939 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9zpn7"] Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.222705 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9zpn7" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.228196 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9zpn7"] Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.230633 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-j28dc" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.254659 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7l6j\" (UniqueName: \"kubernetes.io/projected/484e6f74-ec79-469c-a399-9a25b1acbf7b-kube-api-access-x7l6j\") pod \"484e6f74-ec79-469c-a399-9a25b1acbf7b\" (UID: \"484e6f74-ec79-469c-a399-9a25b1acbf7b\") " Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.254981 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swgnq\" (UniqueName: \"kubernetes.io/projected/dc3edcd9-ded9-49ab-bed4-90a69169cf3f-kube-api-access-swgnq\") pod \"openstack-operator-index-9zpn7\" (UID: \"dc3edcd9-ded9-49ab-bed4-90a69169cf3f\") " pod="openstack-operators/openstack-operator-index-9zpn7" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.260673 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484e6f74-ec79-469c-a399-9a25b1acbf7b-kube-api-access-x7l6j" (OuterVolumeSpecName: "kube-api-access-x7l6j") pod "484e6f74-ec79-469c-a399-9a25b1acbf7b" (UID: "484e6f74-ec79-469c-a399-9a25b1acbf7b"). InnerVolumeSpecName "kube-api-access-x7l6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.355840 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swgnq\" (UniqueName: \"kubernetes.io/projected/dc3edcd9-ded9-49ab-bed4-90a69169cf3f-kube-api-access-swgnq\") pod \"openstack-operator-index-9zpn7\" (UID: \"dc3edcd9-ded9-49ab-bed4-90a69169cf3f\") " pod="openstack-operators/openstack-operator-index-9zpn7" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.355982 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7l6j\" (UniqueName: \"kubernetes.io/projected/484e6f74-ec79-469c-a399-9a25b1acbf7b-kube-api-access-x7l6j\") on node \"crc\" DevicePath \"\"" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.374318 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swgnq\" (UniqueName: \"kubernetes.io/projected/dc3edcd9-ded9-49ab-bed4-90a69169cf3f-kube-api-access-swgnq\") pod \"openstack-operator-index-9zpn7\" (UID: \"dc3edcd9-ded9-49ab-bed4-90a69169cf3f\") " pod="openstack-operators/openstack-operator-index-9zpn7" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.548577 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9zpn7" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.893880 4787 generic.go:334] "Generic (PLEG): container finished" podID="484e6f74-ec79-469c-a399-9a25b1acbf7b" containerID="43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700" exitCode=0 Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.894027 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-j28dc" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.894018 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j28dc" event={"ID":"484e6f74-ec79-469c-a399-9a25b1acbf7b","Type":"ContainerDied","Data":"43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700"} Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.894512 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j28dc" event={"ID":"484e6f74-ec79-469c-a399-9a25b1acbf7b","Type":"ContainerDied","Data":"966aae227fe37e4fa60590044473141543a64099de9393c65768e1f47ac18333"} Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.894572 4787 scope.go:117] "RemoveContainer" containerID="43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.925130 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-j28dc"] Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.925994 4787 scope.go:117] "RemoveContainer" containerID="43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700" Jan 26 17:59:07 crc kubenswrapper[4787]: E0126 17:59:07.926534 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700\": container with ID starting with 43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700 not found: ID does not exist" containerID="43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.926574 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700"} err="failed to get container status 
\"43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700\": rpc error: code = NotFound desc = could not find container \"43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700\": container with ID starting with 43494276d3d9cb733dd7283f5f7de0b3ce85cd2111679a1d6760ac67bcf9a700 not found: ID does not exist" Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.930744 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-j28dc"] Jan 26 17:59:07 crc kubenswrapper[4787]: I0126 17:59:07.945754 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9zpn7"] Jan 26 17:59:08 crc kubenswrapper[4787]: I0126 17:59:08.906760 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9zpn7" event={"ID":"dc3edcd9-ded9-49ab-bed4-90a69169cf3f","Type":"ContainerStarted","Data":"4f48fdd85aa67fb885dcace4f4b1f73645b29e46d9e2f607da5cd1e730d9b4c5"} Jan 26 17:59:08 crc kubenswrapper[4787]: I0126 17:59:08.907539 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9zpn7" event={"ID":"dc3edcd9-ded9-49ab-bed4-90a69169cf3f","Type":"ContainerStarted","Data":"4ba15f90bfe1ba71063778bbc05a598b2755c87a939e1296f35fe0038fa93405"} Jan 26 17:59:08 crc kubenswrapper[4787]: I0126 17:59:08.928806 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9zpn7" podStartSLOduration=1.515301596 podStartE2EDuration="1.928789634s" podCreationTimestamp="2026-01-26 17:59:07 +0000 UTC" firstStartedPulling="2026-01-26 17:59:07.954567491 +0000 UTC m=+916.661703614" lastFinishedPulling="2026-01-26 17:59:08.368055519 +0000 UTC m=+917.075191652" observedRunningTime="2026-01-26 17:59:08.92590959 +0000 UTC m=+917.633045733" watchObservedRunningTime="2026-01-26 17:59:08.928789634 +0000 UTC m=+917.635925757" Jan 26 17:59:09 crc 
kubenswrapper[4787]: I0126 17:59:09.596381 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="484e6f74-ec79-469c-a399-9a25b1acbf7b" path="/var/lib/kubelet/pods/484e6f74-ec79-469c-a399-9a25b1acbf7b/volumes" Jan 26 17:59:17 crc kubenswrapper[4787]: I0126 17:59:17.549594 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-9zpn7" Jan 26 17:59:17 crc kubenswrapper[4787]: I0126 17:59:17.550197 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-9zpn7" Jan 26 17:59:17 crc kubenswrapper[4787]: I0126 17:59:17.576977 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-9zpn7" Jan 26 17:59:17 crc kubenswrapper[4787]: I0126 17:59:17.997447 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-9zpn7" Jan 26 17:59:18 crc kubenswrapper[4787]: I0126 17:59:18.867099 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p"] Jan 26 17:59:18 crc kubenswrapper[4787]: E0126 17:59:18.867695 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484e6f74-ec79-469c-a399-9a25b1acbf7b" containerName="registry-server" Jan 26 17:59:18 crc kubenswrapper[4787]: I0126 17:59:18.867712 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="484e6f74-ec79-469c-a399-9a25b1acbf7b" containerName="registry-server" Jan 26 17:59:18 crc kubenswrapper[4787]: I0126 17:59:18.867835 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="484e6f74-ec79-469c-a399-9a25b1acbf7b" containerName="registry-server" Jan 26 17:59:18 crc kubenswrapper[4787]: I0126 17:59:18.868822 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:18 crc kubenswrapper[4787]: I0126 17:59:18.872176 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hllq9" Jan 26 17:59:18 crc kubenswrapper[4787]: I0126 17:59:18.878888 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p"] Jan 26 17:59:18 crc kubenswrapper[4787]: I0126 17:59:18.902492 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcq2v\" (UniqueName: \"kubernetes.io/projected/a27fe4be-d228-472f-9b3f-f6e7a938e264-kube-api-access-jcq2v\") pod \"bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:18 crc kubenswrapper[4787]: I0126 17:59:18.902567 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-util\") pod \"bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:18 crc kubenswrapper[4787]: I0126 17:59:18.902600 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-bundle\") pod \"bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:19 crc kubenswrapper[4787]: I0126 
17:59:19.004002 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcq2v\" (UniqueName: \"kubernetes.io/projected/a27fe4be-d228-472f-9b3f-f6e7a938e264-kube-api-access-jcq2v\") pod \"bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:19 crc kubenswrapper[4787]: I0126 17:59:19.004080 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-util\") pod \"bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:19 crc kubenswrapper[4787]: I0126 17:59:19.004099 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-bundle\") pod \"bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:19 crc kubenswrapper[4787]: I0126 17:59:19.004587 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-bundle\") pod \"bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:19 crc kubenswrapper[4787]: I0126 17:59:19.004629 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-util\") pod \"bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:19 crc kubenswrapper[4787]: I0126 17:59:19.026129 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcq2v\" (UniqueName: \"kubernetes.io/projected/a27fe4be-d228-472f-9b3f-f6e7a938e264-kube-api-access-jcq2v\") pod \"bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:19 crc kubenswrapper[4787]: I0126 17:59:19.186524 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:19 crc kubenswrapper[4787]: I0126 17:59:19.576926 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p"] Jan 26 17:59:19 crc kubenswrapper[4787]: I0126 17:59:19.978084 4787 generic.go:334] "Generic (PLEG): container finished" podID="a27fe4be-d228-472f-9b3f-f6e7a938e264" containerID="1b86b3a6b81d662b5cd69035a8f5e0ed56a8f4ba758e6ed77f2441293f222642" exitCode=0 Jan 26 17:59:19 crc kubenswrapper[4787]: I0126 17:59:19.978269 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" event={"ID":"a27fe4be-d228-472f-9b3f-f6e7a938e264","Type":"ContainerDied","Data":"1b86b3a6b81d662b5cd69035a8f5e0ed56a8f4ba758e6ed77f2441293f222642"} Jan 26 17:59:19 crc kubenswrapper[4787]: I0126 17:59:19.978464 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" event={"ID":"a27fe4be-d228-472f-9b3f-f6e7a938e264","Type":"ContainerStarted","Data":"ca759ce765a92a2721e29ecb9ab5c78209788ef01fa6f95442adb11661186f77"} Jan 26 17:59:20 crc kubenswrapper[4787]: I0126 17:59:20.986444 4787 generic.go:334] "Generic (PLEG): container finished" podID="a27fe4be-d228-472f-9b3f-f6e7a938e264" containerID="cefbd4e61cc1387f2e581f367b17ee3606f6ed94dec9c456785626544645ca87" exitCode=0 Jan 26 17:59:20 crc kubenswrapper[4787]: I0126 17:59:20.986561 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" event={"ID":"a27fe4be-d228-472f-9b3f-f6e7a938e264","Type":"ContainerDied","Data":"cefbd4e61cc1387f2e581f367b17ee3606f6ed94dec9c456785626544645ca87"} Jan 26 17:59:21 crc kubenswrapper[4787]: I0126 17:59:21.996704 4787 generic.go:334] "Generic (PLEG): container finished" podID="a27fe4be-d228-472f-9b3f-f6e7a938e264" containerID="9efe086a9fb3979e41ef97616218946961743d528f631a7f35f53c5674e7d580" exitCode=0 Jan 26 17:59:21 crc kubenswrapper[4787]: I0126 17:59:21.996746 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" event={"ID":"a27fe4be-d228-472f-9b3f-f6e7a938e264","Type":"ContainerDied","Data":"9efe086a9fb3979e41ef97616218946961743d528f631a7f35f53c5674e7d580"} Jan 26 17:59:23 crc kubenswrapper[4787]: I0126 17:59:23.250429 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:23 crc kubenswrapper[4787]: I0126 17:59:23.378155 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-bundle\") pod \"a27fe4be-d228-472f-9b3f-f6e7a938e264\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " Jan 26 17:59:23 crc kubenswrapper[4787]: I0126 17:59:23.378336 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcq2v\" (UniqueName: \"kubernetes.io/projected/a27fe4be-d228-472f-9b3f-f6e7a938e264-kube-api-access-jcq2v\") pod \"a27fe4be-d228-472f-9b3f-f6e7a938e264\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " Jan 26 17:59:23 crc kubenswrapper[4787]: I0126 17:59:23.378373 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-util\") pod \"a27fe4be-d228-472f-9b3f-f6e7a938e264\" (UID: \"a27fe4be-d228-472f-9b3f-f6e7a938e264\") " Jan 26 17:59:23 crc kubenswrapper[4787]: I0126 17:59:23.380428 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-bundle" (OuterVolumeSpecName: "bundle") pod "a27fe4be-d228-472f-9b3f-f6e7a938e264" (UID: "a27fe4be-d228-472f-9b3f-f6e7a938e264"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:59:23 crc kubenswrapper[4787]: I0126 17:59:23.388314 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a27fe4be-d228-472f-9b3f-f6e7a938e264-kube-api-access-jcq2v" (OuterVolumeSpecName: "kube-api-access-jcq2v") pod "a27fe4be-d228-472f-9b3f-f6e7a938e264" (UID: "a27fe4be-d228-472f-9b3f-f6e7a938e264"). InnerVolumeSpecName "kube-api-access-jcq2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:59:23 crc kubenswrapper[4787]: I0126 17:59:23.394214 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-util" (OuterVolumeSpecName: "util") pod "a27fe4be-d228-472f-9b3f-f6e7a938e264" (UID: "a27fe4be-d228-472f-9b3f-f6e7a938e264"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:59:23 crc kubenswrapper[4787]: I0126 17:59:23.480053 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcq2v\" (UniqueName: \"kubernetes.io/projected/a27fe4be-d228-472f-9b3f-f6e7a938e264-kube-api-access-jcq2v\") on node \"crc\" DevicePath \"\"" Jan 26 17:59:23 crc kubenswrapper[4787]: I0126 17:59:23.480102 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-util\") on node \"crc\" DevicePath \"\"" Jan 26 17:59:23 crc kubenswrapper[4787]: I0126 17:59:23.480112 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27fe4be-d228-472f-9b3f-f6e7a938e264-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:59:24 crc kubenswrapper[4787]: I0126 17:59:24.015508 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" event={"ID":"a27fe4be-d228-472f-9b3f-f6e7a938e264","Type":"ContainerDied","Data":"ca759ce765a92a2721e29ecb9ab5c78209788ef01fa6f95442adb11661186f77"} Jan 26 17:59:24 crc kubenswrapper[4787]: I0126 17:59:24.015553 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca759ce765a92a2721e29ecb9ab5c78209788ef01fa6f95442adb11661186f77" Jan 26 17:59:24 crc kubenswrapper[4787]: I0126 17:59:24.015633 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.196339 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc"] Jan 26 17:59:31 crc kubenswrapper[4787]: E0126 17:59:31.197193 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27fe4be-d228-472f-9b3f-f6e7a938e264" containerName="pull" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.197210 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27fe4be-d228-472f-9b3f-f6e7a938e264" containerName="pull" Jan 26 17:59:31 crc kubenswrapper[4787]: E0126 17:59:31.197230 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27fe4be-d228-472f-9b3f-f6e7a938e264" containerName="extract" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.197264 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27fe4be-d228-472f-9b3f-f6e7a938e264" containerName="extract" Jan 26 17:59:31 crc kubenswrapper[4787]: E0126 17:59:31.197279 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27fe4be-d228-472f-9b3f-f6e7a938e264" containerName="util" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.197287 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27fe4be-d228-472f-9b3f-f6e7a938e264" containerName="util" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.197420 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a27fe4be-d228-472f-9b3f-f6e7a938e264" containerName="extract" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.197924 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.199808 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-mp88b" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.215976 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc"] Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.281815 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkb56\" (UniqueName: \"kubernetes.io/projected/7a37d9dd-cbf7-4a37-980b-c7e6a455703e-kube-api-access-rkb56\") pod \"openstack-operator-controller-init-6c56cb8cbf-24hlc\" (UID: \"7a37d9dd-cbf7-4a37-980b-c7e6a455703e\") " pod="openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.383618 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkb56\" (UniqueName: \"kubernetes.io/projected/7a37d9dd-cbf7-4a37-980b-c7e6a455703e-kube-api-access-rkb56\") pod \"openstack-operator-controller-init-6c56cb8cbf-24hlc\" (UID: \"7a37d9dd-cbf7-4a37-980b-c7e6a455703e\") " pod="openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.404619 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkb56\" (UniqueName: \"kubernetes.io/projected/7a37d9dd-cbf7-4a37-980b-c7e6a455703e-kube-api-access-rkb56\") pod \"openstack-operator-controller-init-6c56cb8cbf-24hlc\" (UID: \"7a37d9dd-cbf7-4a37-980b-c7e6a455703e\") " pod="openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.532907 4787 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc" Jan 26 17:59:31 crc kubenswrapper[4787]: I0126 17:59:31.990381 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc"] Jan 26 17:59:32 crc kubenswrapper[4787]: I0126 17:59:32.064163 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc" event={"ID":"7a37d9dd-cbf7-4a37-980b-c7e6a455703e","Type":"ContainerStarted","Data":"581cb02ff4cfa72e4ce5ba07d34d0cd52c43e25bd5f21463cdd2136f86a3d970"} Jan 26 17:59:36 crc kubenswrapper[4787]: I0126 17:59:36.871088 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pzdpn"] Jan 26 17:59:36 crc kubenswrapper[4787]: I0126 17:59:36.874411 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:36 crc kubenswrapper[4787]: I0126 17:59:36.881509 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzdpn"] Jan 26 17:59:36 crc kubenswrapper[4787]: I0126 17:59:36.972724 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-utilities\") pod \"certified-operators-pzdpn\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:36 crc kubenswrapper[4787]: I0126 17:59:36.972834 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsn7c\" (UniqueName: \"kubernetes.io/projected/8fe60505-e284-4dc0-a89a-523a953f9c26-kube-api-access-lsn7c\") pod \"certified-operators-pzdpn\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " 
pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:36 crc kubenswrapper[4787]: I0126 17:59:36.973113 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-catalog-content\") pod \"certified-operators-pzdpn\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:37 crc kubenswrapper[4787]: I0126 17:59:37.074749 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-utilities\") pod \"certified-operators-pzdpn\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:37 crc kubenswrapper[4787]: I0126 17:59:37.074809 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsn7c\" (UniqueName: \"kubernetes.io/projected/8fe60505-e284-4dc0-a89a-523a953f9c26-kube-api-access-lsn7c\") pod \"certified-operators-pzdpn\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:37 crc kubenswrapper[4787]: I0126 17:59:37.074858 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-catalog-content\") pod \"certified-operators-pzdpn\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:37 crc kubenswrapper[4787]: I0126 17:59:37.075476 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-utilities\") pod \"certified-operators-pzdpn\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " 
pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:37 crc kubenswrapper[4787]: I0126 17:59:37.075488 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-catalog-content\") pod \"certified-operators-pzdpn\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:37 crc kubenswrapper[4787]: I0126 17:59:37.094613 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsn7c\" (UniqueName: \"kubernetes.io/projected/8fe60505-e284-4dc0-a89a-523a953f9c26-kube-api-access-lsn7c\") pod \"certified-operators-pzdpn\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:37 crc kubenswrapper[4787]: I0126 17:59:37.196305 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:38 crc kubenswrapper[4787]: I0126 17:59:38.102791 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzdpn"] Jan 26 17:59:40 crc kubenswrapper[4787]: I0126 17:59:40.120350 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzdpn" event={"ID":"8fe60505-e284-4dc0-a89a-523a953f9c26","Type":"ContainerStarted","Data":"fdf2ef8cf19f14b34369348e5cc4cb68fec6780e4a34a598e294b2eadfe04de4"} Jan 26 17:59:40 crc kubenswrapper[4787]: I0126 17:59:40.121214 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzdpn" event={"ID":"8fe60505-e284-4dc0-a89a-523a953f9c26","Type":"ContainerStarted","Data":"f46001f34f425a7550b2cf03812a02b2a9e0c9b9616f027b06f5b47acd8e67aa"} Jan 26 17:59:40 crc kubenswrapper[4787]: I0126 17:59:40.122283 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc" event={"ID":"7a37d9dd-cbf7-4a37-980b-c7e6a455703e","Type":"ContainerStarted","Data":"8c2e55c6335c418986bc2126f1fc761ca0c742dd891f41bfb7e2f0e45c5fc85d"} Jan 26 17:59:43 crc kubenswrapper[4787]: I0126 17:59:43.358301 4787 generic.go:334] "Generic (PLEG): container finished" podID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerID="fdf2ef8cf19f14b34369348e5cc4cb68fec6780e4a34a598e294b2eadfe04de4" exitCode=0 Jan 26 17:59:43 crc kubenswrapper[4787]: I0126 17:59:43.359986 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzdpn" event={"ID":"8fe60505-e284-4dc0-a89a-523a953f9c26","Type":"ContainerDied","Data":"fdf2ef8cf19f14b34369348e5cc4cb68fec6780e4a34a598e294b2eadfe04de4"} Jan 26 17:59:43 crc kubenswrapper[4787]: I0126 17:59:43.360025 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc" Jan 26 17:59:43 crc kubenswrapper[4787]: I0126 17:59:43.427127 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc" podStartSLOduration=4.923810296 podStartE2EDuration="12.427096612s" podCreationTimestamp="2026-01-26 17:59:31 +0000 UTC" firstStartedPulling="2026-01-26 17:59:31.997078299 +0000 UTC m=+940.704214432" lastFinishedPulling="2026-01-26 17:59:39.500364615 +0000 UTC m=+948.207500748" observedRunningTime="2026-01-26 17:59:43.425455254 +0000 UTC m=+952.132591387" watchObservedRunningTime="2026-01-26 17:59:43.427096612 +0000 UTC m=+952.134232745" Jan 26 17:59:44 crc kubenswrapper[4787]: I0126 17:59:44.370672 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6c56cb8cbf-24hlc" Jan 26 17:59:45 crc kubenswrapper[4787]: I0126 17:59:45.373474 4787 generic.go:334] "Generic (PLEG): container 
finished" podID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerID="71c500d3ef0db98b10d32906195dd0da47e23acab2836cd8cf14e334e020f2d5" exitCode=0 Jan 26 17:59:45 crc kubenswrapper[4787]: I0126 17:59:45.373531 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzdpn" event={"ID":"8fe60505-e284-4dc0-a89a-523a953f9c26","Type":"ContainerDied","Data":"71c500d3ef0db98b10d32906195dd0da47e23acab2836cd8cf14e334e020f2d5"} Jan 26 17:59:46 crc kubenswrapper[4787]: I0126 17:59:46.400882 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzdpn" event={"ID":"8fe60505-e284-4dc0-a89a-523a953f9c26","Type":"ContainerStarted","Data":"999eeaa2c582aa7ed76c57bad362d3e36a1183a60a90c91ea8a366263e9a6245"} Jan 26 17:59:46 crc kubenswrapper[4787]: I0126 17:59:46.426547 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pzdpn" podStartSLOduration=7.844344615 podStartE2EDuration="10.426518198s" podCreationTimestamp="2026-01-26 17:59:36 +0000 UTC" firstStartedPulling="2026-01-26 17:59:43.363100014 +0000 UTC m=+952.070236147" lastFinishedPulling="2026-01-26 17:59:45.945273587 +0000 UTC m=+954.652409730" observedRunningTime="2026-01-26 17:59:46.423915929 +0000 UTC m=+955.131052062" watchObservedRunningTime="2026-01-26 17:59:46.426518198 +0000 UTC m=+955.133654371" Jan 26 17:59:47 crc kubenswrapper[4787]: I0126 17:59:47.197373 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:47 crc kubenswrapper[4787]: I0126 17:59:47.197720 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:48 crc kubenswrapper[4787]: I0126 17:59:48.239174 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pzdpn" 
podUID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerName="registry-server" probeResult="failure" output=< Jan 26 17:59:48 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 26 17:59:48 crc kubenswrapper[4787]: > Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.268289 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9gwtn"] Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.270251 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.284199 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gwtn"] Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.321712 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-catalog-content\") pod \"redhat-marketplace-9gwtn\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.321861 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-utilities\") pod \"redhat-marketplace-9gwtn\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.321931 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292fk\" (UniqueName: \"kubernetes.io/projected/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-kube-api-access-292fk\") pod \"redhat-marketplace-9gwtn\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " 
pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.423162 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-catalog-content\") pod \"redhat-marketplace-9gwtn\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.423219 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-utilities\") pod \"redhat-marketplace-9gwtn\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.423266 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292fk\" (UniqueName: \"kubernetes.io/projected/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-kube-api-access-292fk\") pod \"redhat-marketplace-9gwtn\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.423724 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-catalog-content\") pod \"redhat-marketplace-9gwtn\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.423849 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-utilities\") pod \"redhat-marketplace-9gwtn\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " pod="openshift-marketplace/redhat-marketplace-9gwtn" 
Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.445398 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292fk\" (UniqueName: \"kubernetes.io/projected/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-kube-api-access-292fk\") pod \"redhat-marketplace-9gwtn\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.587540 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.669494 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-br6x5"] Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.670993 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-br6x5" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.681358 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-br6x5"] Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.829723 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-catalog-content\") pod \"community-operators-br6x5\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " pod="openshift-marketplace/community-operators-br6x5" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.830125 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j522g\" (UniqueName: \"kubernetes.io/projected/02a84a32-50ab-4e52-a923-62f079ecec13-kube-api-access-j522g\") pod \"community-operators-br6x5\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " pod="openshift-marketplace/community-operators-br6x5" Jan 26 17:59:56 crc 
kubenswrapper[4787]: I0126 17:59:56.830144 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-utilities\") pod \"community-operators-br6x5\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " pod="openshift-marketplace/community-operators-br6x5" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.923311 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gwtn"] Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.940699 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j522g\" (UniqueName: \"kubernetes.io/projected/02a84a32-50ab-4e52-a923-62f079ecec13-kube-api-access-j522g\") pod \"community-operators-br6x5\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " pod="openshift-marketplace/community-operators-br6x5" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.940750 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-utilities\") pod \"community-operators-br6x5\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " pod="openshift-marketplace/community-operators-br6x5" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.940787 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-catalog-content\") pod \"community-operators-br6x5\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " pod="openshift-marketplace/community-operators-br6x5" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.941272 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-catalog-content\") 
pod \"community-operators-br6x5\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " pod="openshift-marketplace/community-operators-br6x5" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.941393 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-utilities\") pod \"community-operators-br6x5\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " pod="openshift-marketplace/community-operators-br6x5" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.965041 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j522g\" (UniqueName: \"kubernetes.io/projected/02a84a32-50ab-4e52-a923-62f079ecec13-kube-api-access-j522g\") pod \"community-operators-br6x5\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " pod="openshift-marketplace/community-operators-br6x5" Jan 26 17:59:56 crc kubenswrapper[4787]: I0126 17:59:56.990094 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-br6x5" Jan 26 17:59:57 crc kubenswrapper[4787]: I0126 17:59:57.268736 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:57 crc kubenswrapper[4787]: I0126 17:59:57.335178 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 17:59:57 crc kubenswrapper[4787]: I0126 17:59:57.467180 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gwtn" event={"ID":"8b3db753-c982-467b-b36a-c9d8a1e0d0a6","Type":"ContainerStarted","Data":"405b256ddb8990092f339f928dcba11c4edb86584de89e6985e62cccf0dd4df4"} Jan 26 17:59:57 crc kubenswrapper[4787]: I0126 17:59:57.474128 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-br6x5"] Jan 26 17:59:57 crc kubenswrapper[4787]: W0126 17:59:57.479111 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02a84a32_50ab_4e52_a923_62f079ecec13.slice/crio-14f778a97aa3bd485ac6ea5fe9e0515bd7f0cf98da8b7dee74f9375ea4a27c3a WatchSource:0}: Error finding container 14f778a97aa3bd485ac6ea5fe9e0515bd7f0cf98da8b7dee74f9375ea4a27c3a: Status 404 returned error can't find the container with id 14f778a97aa3bd485ac6ea5fe9e0515bd7f0cf98da8b7dee74f9375ea4a27c3a Jan 26 17:59:58 crc kubenswrapper[4787]: I0126 17:59:58.474099 4787 generic.go:334] "Generic (PLEG): container finished" podID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerID="bf388de7a1c6c30b6a8160b7348414d039d818fb5098345aaf3cf9fb5e79897e" exitCode=0 Jan 26 17:59:58 crc kubenswrapper[4787]: I0126 17:59:58.474277 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gwtn" 
event={"ID":"8b3db753-c982-467b-b36a-c9d8a1e0d0a6","Type":"ContainerDied","Data":"bf388de7a1c6c30b6a8160b7348414d039d818fb5098345aaf3cf9fb5e79897e"} Jan 26 17:59:58 crc kubenswrapper[4787]: I0126 17:59:58.476079 4787 generic.go:334] "Generic (PLEG): container finished" podID="02a84a32-50ab-4e52-a923-62f079ecec13" containerID="f49425525db862620ad59af0fc64f22a01bf24b2cb3750f27e79082b304f5db6" exitCode=0 Jan 26 17:59:58 crc kubenswrapper[4787]: I0126 17:59:58.476113 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br6x5" event={"ID":"02a84a32-50ab-4e52-a923-62f079ecec13","Type":"ContainerDied","Data":"f49425525db862620ad59af0fc64f22a01bf24b2cb3750f27e79082b304f5db6"} Jan 26 17:59:58 crc kubenswrapper[4787]: I0126 17:59:58.476138 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br6x5" event={"ID":"02a84a32-50ab-4e52-a923-62f079ecec13","Type":"ContainerStarted","Data":"14f778a97aa3bd485ac6ea5fe9e0515bd7f0cf98da8b7dee74f9375ea4a27c3a"} Jan 26 17:59:59 crc kubenswrapper[4787]: I0126 17:59:59.484829 4787 generic.go:334] "Generic (PLEG): container finished" podID="02a84a32-50ab-4e52-a923-62f079ecec13" containerID="61cf25cec3ace628ed559fdbb873ce7de95f3e7c582bd0c0b89a3d70af9aafb2" exitCode=0 Jan 26 17:59:59 crc kubenswrapper[4787]: I0126 17:59:59.484876 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br6x5" event={"ID":"02a84a32-50ab-4e52-a923-62f079ecec13","Type":"ContainerDied","Data":"61cf25cec3ace628ed559fdbb873ce7de95f3e7c582bd0c0b89a3d70af9aafb2"} Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.138201 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4"] Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.139456 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.142421 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.143445 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.156836 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4"] Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.185163 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08c4cd17-321a-453c-8d73-d19dcb3695ac-secret-volume\") pod \"collect-profiles-29490840-qqbk4\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.185393 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv449\" (UniqueName: \"kubernetes.io/projected/08c4cd17-321a-453c-8d73-d19dcb3695ac-kube-api-access-gv449\") pod \"collect-profiles-29490840-qqbk4\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.185446 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08c4cd17-321a-453c-8d73-d19dcb3695ac-config-volume\") pod \"collect-profiles-29490840-qqbk4\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.286816 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08c4cd17-321a-453c-8d73-d19dcb3695ac-secret-volume\") pod \"collect-profiles-29490840-qqbk4\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.286894 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv449\" (UniqueName: \"kubernetes.io/projected/08c4cd17-321a-453c-8d73-d19dcb3695ac-kube-api-access-gv449\") pod \"collect-profiles-29490840-qqbk4\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.286922 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08c4cd17-321a-453c-8d73-d19dcb3695ac-config-volume\") pod \"collect-profiles-29490840-qqbk4\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.287898 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08c4cd17-321a-453c-8d73-d19dcb3695ac-config-volume\") pod \"collect-profiles-29490840-qqbk4\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.303057 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/08c4cd17-321a-453c-8d73-d19dcb3695ac-secret-volume\") pod \"collect-profiles-29490840-qqbk4\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.305515 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv449\" (UniqueName: \"kubernetes.io/projected/08c4cd17-321a-453c-8d73-d19dcb3695ac-kube-api-access-gv449\") pod \"collect-profiles-29490840-qqbk4\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.460478 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.540650 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br6x5" event={"ID":"02a84a32-50ab-4e52-a923-62f079ecec13","Type":"ContainerStarted","Data":"32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb"} Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.569518 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-br6x5" podStartSLOduration=3.117261272 podStartE2EDuration="4.569500623s" podCreationTimestamp="2026-01-26 17:59:56 +0000 UTC" firstStartedPulling="2026-01-26 17:59:58.477320347 +0000 UTC m=+967.184456480" lastFinishedPulling="2026-01-26 17:59:59.929559678 +0000 UTC m=+968.636695831" observedRunningTime="2026-01-26 18:00:00.568204214 +0000 UTC m=+969.275340347" watchObservedRunningTime="2026-01-26 18:00:00.569500623 +0000 UTC m=+969.276636756" Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.578313 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerID="cf54e1e7784652ae4f2205720aea24ea42c78e8f1cfe0e45aa19773a3da55583" exitCode=0 Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.578370 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gwtn" event={"ID":"8b3db753-c982-467b-b36a-c9d8a1e0d0a6","Type":"ContainerDied","Data":"cf54e1e7784652ae4f2205720aea24ea42c78e8f1cfe0e45aa19773a3da55583"} Jan 26 18:00:00 crc kubenswrapper[4787]: I0126 18:00:00.807485 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4"] Jan 26 18:00:00 crc kubenswrapper[4787]: W0126 18:00:00.823740 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08c4cd17_321a_453c_8d73_d19dcb3695ac.slice/crio-3b3576b5454b0568426025c778412872a2692017f2b426ce7d146de912e43280 WatchSource:0}: Error finding container 3b3576b5454b0568426025c778412872a2692017f2b426ce7d146de912e43280: Status 404 returned error can't find the container with id 3b3576b5454b0568426025c778412872a2692017f2b426ce7d146de912e43280 Jan 26 18:00:01 crc kubenswrapper[4787]: I0126 18:00:01.597186 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" event={"ID":"08c4cd17-321a-453c-8d73-d19dcb3695ac","Type":"ContainerStarted","Data":"3b3576b5454b0568426025c778412872a2692017f2b426ce7d146de912e43280"} Jan 26 18:00:02 crc kubenswrapper[4787]: I0126 18:00:02.602309 4787 generic.go:334] "Generic (PLEG): container finished" podID="08c4cd17-321a-453c-8d73-d19dcb3695ac" containerID="9c5090c840ffa203df1a3828d548fd0ca7079c308e055bd1609a0b68786e5c07" exitCode=0 Jan 26 18:00:02 crc kubenswrapper[4787]: I0126 18:00:02.602380 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" 
event={"ID":"08c4cd17-321a-453c-8d73-d19dcb3695ac","Type":"ContainerDied","Data":"9c5090c840ffa203df1a3828d548fd0ca7079c308e055bd1609a0b68786e5c07"} Jan 26 18:00:02 crc kubenswrapper[4787]: I0126 18:00:02.605382 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gwtn" event={"ID":"8b3db753-c982-467b-b36a-c9d8a1e0d0a6","Type":"ContainerStarted","Data":"89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71"} Jan 26 18:00:02 crc kubenswrapper[4787]: I0126 18:00:02.654300 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9gwtn" podStartSLOduration=3.196485931 podStartE2EDuration="6.654279994s" podCreationTimestamp="2026-01-26 17:59:56 +0000 UTC" firstStartedPulling="2026-01-26 17:59:58.475318921 +0000 UTC m=+967.182455054" lastFinishedPulling="2026-01-26 18:00:01.933112984 +0000 UTC m=+970.640249117" observedRunningTime="2026-01-26 18:00:02.649265381 +0000 UTC m=+971.356401514" watchObservedRunningTime="2026-01-26 18:00:02.654279994 +0000 UTC m=+971.361416127" Jan 26 18:00:02 crc kubenswrapper[4787]: I0126 18:00:02.863573 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzdpn"] Jan 26 18:00:02 crc kubenswrapper[4787]: I0126 18:00:02.863836 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pzdpn" podUID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerName="registry-server" containerID="cri-o://999eeaa2c582aa7ed76c57bad362d3e36a1183a60a90c91ea8a366263e9a6245" gracePeriod=2 Jan 26 18:00:03 crc kubenswrapper[4787]: E0126 18:00:03.153043 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fe60505_e284_4dc0_a89a_523a953f9c26.slice/crio-999eeaa2c582aa7ed76c57bad362d3e36a1183a60a90c91ea8a366263e9a6245.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fe60505_e284_4dc0_a89a_523a953f9c26.slice/crio-conmon-999eeaa2c582aa7ed76c57bad362d3e36a1183a60a90c91ea8a366263e9a6245.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.617436 4787 generic.go:334] "Generic (PLEG): container finished" podID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerID="999eeaa2c582aa7ed76c57bad362d3e36a1183a60a90c91ea8a366263e9a6245" exitCode=0 Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.618312 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzdpn" event={"ID":"8fe60505-e284-4dc0-a89a-523a953f9c26","Type":"ContainerDied","Data":"999eeaa2c582aa7ed76c57bad362d3e36a1183a60a90c91ea8a366263e9a6245"} Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.709367 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz"] Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.718150 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.722409 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-qh4vj" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.726010 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5"] Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.726816 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.728848 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ws2xp" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.737970 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz"] Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.739996 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bf7x\" (UniqueName: \"kubernetes.io/projected/a72edcac-00ca-46a7-ab30-551c750eb2cd-kube-api-access-8bf7x\") pod \"barbican-operator-controller-manager-7f86f8796f-w7bvz\" (UID: \"a72edcac-00ca-46a7-ab30-551c750eb2cd\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.740073 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtjck\" (UniqueName: \"kubernetes.io/projected/3923acf7-e06a-4351-84aa-7def61b4ca71-kube-api-access-wtjck\") pod \"cinder-operator-controller-manager-7478f7dbf9-ggtn5\" (UID: \"3923acf7-e06a-4351-84aa-7def61b4ca71\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.754861 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5"] Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.778034 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm"] Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.779341 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.782982 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-hghkc" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.808876 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp"] Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.809669 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.811092 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-dpchk" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.830926 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm"] Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.837617 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp"] Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.845919 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf"] Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.846184 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtjck\" (UniqueName: \"kubernetes.io/projected/3923acf7-e06a-4351-84aa-7def61b4ca71-kube-api-access-wtjck\") pod \"cinder-operator-controller-manager-7478f7dbf9-ggtn5\" (UID: \"3923acf7-e06a-4351-84aa-7def61b4ca71\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" Jan 26 18:00:03 crc 
kubenswrapper[4787]: I0126 18:00:03.846296 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bf7x\" (UniqueName: \"kubernetes.io/projected/a72edcac-00ca-46a7-ab30-551c750eb2cd-kube-api-access-8bf7x\") pod \"barbican-operator-controller-manager-7f86f8796f-w7bvz\" (UID: \"a72edcac-00ca-46a7-ab30-551c750eb2cd\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.846849 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.853717 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-p9bcv" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.864378 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf"] Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.879555 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.904265 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtjck\" (UniqueName: \"kubernetes.io/projected/3923acf7-e06a-4351-84aa-7def61b4ca71-kube-api-access-wtjck\") pod \"cinder-operator-controller-manager-7478f7dbf9-ggtn5\" (UID: \"3923acf7-e06a-4351-84aa-7def61b4ca71\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.911971 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bf7x\" (UniqueName: \"kubernetes.io/projected/a72edcac-00ca-46a7-ab30-551c750eb2cd-kube-api-access-8bf7x\") pod \"barbican-operator-controller-manager-7f86f8796f-w7bvz\" (UID: \"a72edcac-00ca-46a7-ab30-551c750eb2cd\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.927998 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk"] Jan 26 18:00:03 crc kubenswrapper[4787]: E0126 18:00:03.928277 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerName="extract-content" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.928288 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerName="extract-content" Jan 26 18:00:03 crc kubenswrapper[4787]: E0126 18:00:03.928300 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerName="extract-utilities" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.928306 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerName="extract-utilities" Jan 26 18:00:03 crc 
kubenswrapper[4787]: E0126 18:00:03.928328 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerName="registry-server" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.928335 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerName="registry-server" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.928469 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe60505-e284-4dc0-a89a-523a953f9c26" containerName="registry-server" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.928881 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.938084 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-tlfbz" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.950276 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-catalog-content\") pod \"8fe60505-e284-4dc0-a89a-523a953f9c26\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.950393 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsn7c\" (UniqueName: \"kubernetes.io/projected/8fe60505-e284-4dc0-a89a-523a953f9c26-kube-api-access-lsn7c\") pod \"8fe60505-e284-4dc0-a89a-523a953f9c26\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.950440 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-utilities\") pod 
\"8fe60505-e284-4dc0-a89a-523a953f9c26\" (UID: \"8fe60505-e284-4dc0-a89a-523a953f9c26\") " Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.950644 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-568ng\" (UniqueName: \"kubernetes.io/projected/a42402ec-c6f5-4be4-b649-1bfb41ebf1b0-kube-api-access-568ng\") pod \"horizon-operator-controller-manager-77d5c5b54f-twprk\" (UID: \"a42402ec-c6f5-4be4-b649-1bfb41ebf1b0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.950692 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n828r\" (UniqueName: \"kubernetes.io/projected/62502932-23fe-4a77-a89a-26fd15f0f44f-kube-api-access-n828r\") pod \"heat-operator-controller-manager-594c8c9d5d-nvpxf\" (UID: \"62502932-23fe-4a77-a89a-26fd15f0f44f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.950747 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfszc\" (UniqueName: \"kubernetes.io/projected/6e4131c6-1507-4b15-92b7-29a2fe7f3775-kube-api-access-gfszc\") pod \"designate-operator-controller-manager-b45d7bf98-9n2vm\" (UID: \"6e4131c6-1507-4b15-92b7-29a2fe7f3775\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.950803 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtmx8\" (UniqueName: \"kubernetes.io/projected/c5f0e34d-3e5d-458a-a560-08769cb30849-kube-api-access-jtmx8\") pod \"glance-operator-controller-manager-78fdd796fd-glflp\" (UID: \"c5f0e34d-3e5d-458a-a560-08769cb30849\") " 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.951647 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-utilities" (OuterVolumeSpecName: "utilities") pod "8fe60505-e284-4dc0-a89a-523a953f9c26" (UID: "8fe60505-e284-4dc0-a89a-523a953f9c26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.956125 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fe60505-e284-4dc0-a89a-523a953f9c26-kube-api-access-lsn7c" (OuterVolumeSpecName: "kube-api-access-lsn7c") pod "8fe60505-e284-4dc0-a89a-523a953f9c26" (UID: "8fe60505-e284-4dc0-a89a-523a953f9c26"). InnerVolumeSpecName "kube-api-access-lsn7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:00:03 crc kubenswrapper[4787]: I0126 18:00:03.997783 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.006039 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.007517 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.009364 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.009542 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-82d6v" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.036026 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.040625 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.041536 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.043729 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zpspj" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.054100 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n828r\" (UniqueName: \"kubernetes.io/projected/62502932-23fe-4a77-a89a-26fd15f0f44f-kube-api-access-n828r\") pod \"heat-operator-controller-manager-594c8c9d5d-nvpxf\" (UID: \"62502932-23fe-4a77-a89a-26fd15f0f44f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.054160 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfszc\" (UniqueName: 
\"kubernetes.io/projected/6e4131c6-1507-4b15-92b7-29a2fe7f3775-kube-api-access-gfszc\") pod \"designate-operator-controller-manager-b45d7bf98-9n2vm\" (UID: \"6e4131c6-1507-4b15-92b7-29a2fe7f3775\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.054183 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtmx8\" (UniqueName: \"kubernetes.io/projected/c5f0e34d-3e5d-458a-a560-08769cb30849-kube-api-access-jtmx8\") pod \"glance-operator-controller-manager-78fdd796fd-glflp\" (UID: \"c5f0e34d-3e5d-458a-a560-08769cb30849\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.054210 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dts9m\" (UniqueName: \"kubernetes.io/projected/01d314a4-2f86-4cc3-ac94-7a09b363a05d-kube-api-access-dts9m\") pod \"keystone-operator-controller-manager-b8b6d4659-bswjx\" (UID: \"01d314a4-2f86-4cc3-ac94-7a09b363a05d\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.054248 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert\") pod \"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.054264 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdndw\" (UniqueName: \"kubernetes.io/projected/8fe5e013-7524-461a-9fae-0867594144d5-kube-api-access-cdndw\") pod 
\"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.054287 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-568ng\" (UniqueName: \"kubernetes.io/projected/a42402ec-c6f5-4be4-b649-1bfb41ebf1b0-kube-api-access-568ng\") pod \"horizon-operator-controller-manager-77d5c5b54f-twprk\" (UID: \"a42402ec-c6f5-4be4-b649-1bfb41ebf1b0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.054326 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsn7c\" (UniqueName: \"kubernetes.io/projected/8fe60505-e284-4dc0-a89a-523a953f9c26-kube-api-access-lsn7c\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.054338 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.059959 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fe60505-e284-4dc0-a89a-523a953f9c26" (UID: "8fe60505-e284-4dc0-a89a-523a953f9c26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.091006 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.091621 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfszc\" (UniqueName: \"kubernetes.io/projected/6e4131c6-1507-4b15-92b7-29a2fe7f3775-kube-api-access-gfszc\") pod \"designate-operator-controller-manager-b45d7bf98-9n2vm\" (UID: \"6e4131c6-1507-4b15-92b7-29a2fe7f3775\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.091845 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtmx8\" (UniqueName: \"kubernetes.io/projected/c5f0e34d-3e5d-458a-a560-08769cb30849-kube-api-access-jtmx8\") pod \"glance-operator-controller-manager-78fdd796fd-glflp\" (UID: \"c5f0e34d-3e5d-458a-a560-08769cb30849\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.092020 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.093840 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-568ng\" (UniqueName: \"kubernetes.io/projected/a42402ec-c6f5-4be4-b649-1bfb41ebf1b0-kube-api-access-568ng\") pod \"horizon-operator-controller-manager-77d5c5b54f-twprk\" (UID: \"a42402ec-c6f5-4be4-b649-1bfb41ebf1b0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.094220 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-sf8hl" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.107261 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.117550 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n828r\" (UniqueName: \"kubernetes.io/projected/62502932-23fe-4a77-a89a-26fd15f0f44f-kube-api-access-n828r\") pod \"heat-operator-controller-manager-594c8c9d5d-nvpxf\" (UID: \"62502932-23fe-4a77-a89a-26fd15f0f44f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.118299 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.119145 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.121157 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-dmm8p" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.125996 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.132475 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.133343 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.143089 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-2h7g2" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.143249 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.152886 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.158550 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.165197 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.165965 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dts9m\" (UniqueName: \"kubernetes.io/projected/01d314a4-2f86-4cc3-ac94-7a09b363a05d-kube-api-access-dts9m\") pod \"keystone-operator-controller-manager-b8b6d4659-bswjx\" (UID: \"01d314a4-2f86-4cc3-ac94-7a09b363a05d\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.166134 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert\") pod \"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.166171 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdndw\" (UniqueName: \"kubernetes.io/projected/8fe5e013-7524-461a-9fae-0867594144d5-kube-api-access-cdndw\") pod \"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:04 crc kubenswrapper[4787]: E0126 18:00:04.168390 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.168435 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe60505-e284-4dc0-a89a-523a953f9c26-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:04 crc kubenswrapper[4787]: E0126 18:00:04.168531 4787 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert podName:8fe5e013-7524-461a-9fae-0867594144d5 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:04.668514098 +0000 UTC m=+973.375650231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert") pod "infra-operator-controller-manager-694cf4f878-m9g4s" (UID: "8fe5e013-7524-461a-9fae-0867594144d5") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.174701 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.208367 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm"] Jan 26 18:00:04 crc kubenswrapper[4787]: E0126 18:00:04.209448 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c4cd17-321a-453c-8d73-d19dcb3695ac" containerName="collect-profiles" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.209463 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c4cd17-321a-453c-8d73-d19dcb3695ac" containerName="collect-profiles" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.209590 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c4cd17-321a-453c-8d73-d19dcb3695ac" containerName="collect-profiles" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.210626 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.216650 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2wbr9" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.218607 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdndw\" (UniqueName: \"kubernetes.io/projected/8fe5e013-7524-461a-9fae-0867594144d5-kube-api-access-cdndw\") pod \"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.223382 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.224569 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.228078 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dts9m\" (UniqueName: \"kubernetes.io/projected/01d314a4-2f86-4cc3-ac94-7a09b363a05d-kube-api-access-dts9m\") pod \"keystone-operator-controller-manager-b8b6d4659-bswjx\" (UID: \"01d314a4-2f86-4cc3-ac94-7a09b363a05d\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.228155 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.229147 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.232783 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ds6z4" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.237096 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.238970 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-mq7vv" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.244119 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.244613 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.250429 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.264794 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.265667 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.274253 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.275690 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.276474 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08c4cd17-321a-453c-8d73-d19dcb3695ac-config-volume\") pod \"08c4cd17-321a-453c-8d73-d19dcb3695ac\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.276515 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08c4cd17-321a-453c-8d73-d19dcb3695ac-secret-volume\") pod \"08c4cd17-321a-453c-8d73-d19dcb3695ac\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.276547 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv449\" (UniqueName: \"kubernetes.io/projected/08c4cd17-321a-453c-8d73-d19dcb3695ac-kube-api-access-gv449\") pod \"08c4cd17-321a-453c-8d73-d19dcb3695ac\" (UID: \"08c4cd17-321a-453c-8d73-d19dcb3695ac\") " Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.276668 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6w56\" (UniqueName: \"kubernetes.io/projected/87ebc1f2-80bd-46db-a605-c3667e656f5b-kube-api-access-h6w56\") pod \"ironic-operator-controller-manager-598f7747c9-gfrbb\" (UID: \"87ebc1f2-80bd-46db-a605-c3667e656f5b\") " 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.276744 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67h55\" (UniqueName: \"kubernetes.io/projected/b0561e24-5692-4790-9cbd-3a74a8c3ce69-kube-api-access-67h55\") pod \"manila-operator-controller-manager-78c6999f6f-tcj6b\" (UID: \"b0561e24-5692-4790-9cbd-3a74a8c3ce69\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.276796 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vltdb\" (UniqueName: \"kubernetes.io/projected/52a2ecbf-eca4-447b-a516-e8e71194c5ff-kube-api-access-vltdb\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8\" (UID: \"52a2ecbf-eca4-447b-a516-e8e71194c5ff\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.278251 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c4cd17-321a-453c-8d73-d19dcb3695ac-config-volume" (OuterVolumeSpecName: "config-volume") pod "08c4cd17-321a-453c-8d73-d19dcb3695ac" (UID: "08c4cd17-321a-453c-8d73-d19dcb3695ac"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.287676 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.288488 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c4cd17-321a-453c-8d73-d19dcb3695ac-kube-api-access-gv449" (OuterVolumeSpecName: "kube-api-access-gv449") pod "08c4cd17-321a-453c-8d73-d19dcb3695ac" (UID: "08c4cd17-321a-453c-8d73-d19dcb3695ac"). InnerVolumeSpecName "kube-api-access-gv449". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.290060 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.290528 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sngpq" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.290681 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nnqdc" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.292190 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c4cd17-321a-453c-8d73-d19dcb3695ac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "08c4cd17-321a-453c-8d73-d19dcb3695ac" (UID: "08c4cd17-321a-453c-8d73-d19dcb3695ac"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.295651 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.299043 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.304175 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-t8mfd" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.308475 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.311255 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.316616 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.322713 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.323555 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.323876 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.325347 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-922q4" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.333407 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.335313 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.337671 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-qmlqh" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.357515 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.366772 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377527 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggg4b\" (UniqueName: \"kubernetes.io/projected/3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59-kube-api-access-ggg4b\") pod \"ovn-operator-controller-manager-6f75f45d54-trx4l\" (UID: \"3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377572 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67h55\" (UniqueName: \"kubernetes.io/projected/b0561e24-5692-4790-9cbd-3a74a8c3ce69-kube-api-access-67h55\") pod \"manila-operator-controller-manager-78c6999f6f-tcj6b\" (UID: \"b0561e24-5692-4790-9cbd-3a74a8c3ce69\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377592 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfj9t\" (UniqueName: \"kubernetes.io/projected/70abdb24-0f0e-477a-8c22-7a01f73c05f2-kube-api-access-lfj9t\") pod \"openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5\" (UID: \"70abdb24-0f0e-477a-8c22-7a01f73c05f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377656 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert\") pod 
\"openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5\" (UID: \"70abdb24-0f0e-477a-8c22-7a01f73c05f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377681 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx2hc\" (UniqueName: \"kubernetes.io/projected/ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7-kube-api-access-zx2hc\") pod \"nova-operator-controller-manager-7bdb645866-zhtc5\" (UID: \"ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377705 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vltdb\" (UniqueName: \"kubernetes.io/projected/52a2ecbf-eca4-447b-a516-e8e71194c5ff-kube-api-access-vltdb\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8\" (UID: \"52a2ecbf-eca4-447b-a516-e8e71194c5ff\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377748 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6w56\" (UniqueName: \"kubernetes.io/projected/87ebc1f2-80bd-46db-a605-c3667e656f5b-kube-api-access-h6w56\") pod \"ironic-operator-controller-manager-598f7747c9-gfrbb\" (UID: \"87ebc1f2-80bd-46db-a605-c3667e656f5b\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377776 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkwq2\" (UniqueName: \"kubernetes.io/projected/8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3-kube-api-access-lkwq2\") pod \"octavia-operator-controller-manager-5f4cd88d46-vqc5p\" (UID: 
\"8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377801 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68lg4\" (UniqueName: \"kubernetes.io/projected/c3415733-55e0-4c4f-8bb6-0663ddf67633-kube-api-access-68lg4\") pod \"neutron-operator-controller-manager-78d58447c5-4v6fm\" (UID: \"c3415733-55e0-4c4f-8bb6-0663ddf67633\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377843 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08c4cd17-321a-453c-8d73-d19dcb3695ac-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377853 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08c4cd17-321a-453c-8d73-d19dcb3695ac-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.377861 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv449\" (UniqueName: \"kubernetes.io/projected/08c4cd17-321a-453c-8d73-d19dcb3695ac-kube-api-access-gv449\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.393313 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.395844 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6w56\" (UniqueName: \"kubernetes.io/projected/87ebc1f2-80bd-46db-a605-c3667e656f5b-kube-api-access-h6w56\") pod \"ironic-operator-controller-manager-598f7747c9-gfrbb\" (UID: \"87ebc1f2-80bd-46db-a605-c3667e656f5b\") " 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.424712 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vltdb\" (UniqueName: \"kubernetes.io/projected/52a2ecbf-eca4-447b-a516-e8e71194c5ff-kube-api-access-vltdb\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8\" (UID: \"52a2ecbf-eca4-447b-a516-e8e71194c5ff\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.428145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67h55\" (UniqueName: \"kubernetes.io/projected/b0561e24-5692-4790-9cbd-3a74a8c3ce69-kube-api-access-67h55\") pod \"manila-operator-controller-manager-78c6999f6f-tcj6b\" (UID: \"b0561e24-5692-4790-9cbd-3a74a8c3ce69\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.442039 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.443208 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.454314 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9f8cf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.454904 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.475413 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.479139 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkwq2\" (UniqueName: \"kubernetes.io/projected/8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3-kube-api-access-lkwq2\") pod \"octavia-operator-controller-manager-5f4cd88d46-vqc5p\" (UID: \"8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.479277 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68lg4\" (UniqueName: \"kubernetes.io/projected/c3415733-55e0-4c4f-8bb6-0663ddf67633-kube-api-access-68lg4\") pod \"neutron-operator-controller-manager-78d58447c5-4v6fm\" (UID: \"c3415733-55e0-4c4f-8bb6-0663ddf67633\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.479387 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggg4b\" (UniqueName: \"kubernetes.io/projected/3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59-kube-api-access-ggg4b\") pod \"ovn-operator-controller-manager-6f75f45d54-trx4l\" (UID: \"3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.479478 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfj9t\" (UniqueName: \"kubernetes.io/projected/70abdb24-0f0e-477a-8c22-7a01f73c05f2-kube-api-access-lfj9t\") pod 
\"openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5\" (UID: \"70abdb24-0f0e-477a-8c22-7a01f73c05f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.479615 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert\") pod \"openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5\" (UID: \"70abdb24-0f0e-477a-8c22-7a01f73c05f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.479747 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m65lc\" (UniqueName: \"kubernetes.io/projected/4174fb8a-905f-4d3a-9dbc-5212b68319f2-kube-api-access-m65lc\") pod \"swift-operator-controller-manager-547cbdb99f-6k5bf\" (UID: \"4174fb8a-905f-4d3a-9dbc-5212b68319f2\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.479860 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx2hc\" (UniqueName: \"kubernetes.io/projected/ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7-kube-api-access-zx2hc\") pod \"nova-operator-controller-manager-7bdb645866-zhtc5\" (UID: \"ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.480045 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt2qh\" (UniqueName: \"kubernetes.io/projected/483c1dd7-425f-43b5-a848-efdc2d9899d0-kube-api-access-pt2qh\") pod \"placement-operator-controller-manager-79d5ccc684-4wd75\" (UID: \"483c1dd7-425f-43b5-a848-efdc2d9899d0\") " 
pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" Jan 26 18:00:04 crc kubenswrapper[4787]: E0126 18:00:04.480119 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:00:04 crc kubenswrapper[4787]: E0126 18:00:04.480218 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert podName:70abdb24-0f0e-477a-8c22-7a01f73c05f2 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:04.980176239 +0000 UTC m=+973.687312432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert") pod "openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" (UID: "70abdb24-0f0e-477a-8c22-7a01f73c05f2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.480310 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkztm\" (UniqueName: \"kubernetes.io/projected/2d7744e1-c01e-4dd9-87f2-7aa6695c2d60-kube-api-access-rkztm\") pod \"telemetry-operator-controller-manager-85cd9769bb-x8ql6\" (UID: \"2d7744e1-c01e-4dd9-87f2-7aa6695c2d60\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.484928 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.518355 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx2hc\" (UniqueName: \"kubernetes.io/projected/ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7-kube-api-access-zx2hc\") pod \"nova-operator-controller-manager-7bdb645866-zhtc5\" (UID: \"ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.526137 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggg4b\" (UniqueName: \"kubernetes.io/projected/3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59-kube-api-access-ggg4b\") pod \"ovn-operator-controller-manager-6f75f45d54-trx4l\" (UID: \"3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.532399 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.540916 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68lg4\" (UniqueName: \"kubernetes.io/projected/c3415733-55e0-4c4f-8bb6-0663ddf67633-kube-api-access-68lg4\") pod \"neutron-operator-controller-manager-78d58447c5-4v6fm\" (UID: \"c3415733-55e0-4c4f-8bb6-0663ddf67633\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.549452 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfj9t\" (UniqueName: \"kubernetes.io/projected/70abdb24-0f0e-477a-8c22-7a01f73c05f2-kube-api-access-lfj9t\") pod \"openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5\" (UID: \"70abdb24-0f0e-477a-8c22-7a01f73c05f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.550240 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.567427 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.571844 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkwq2\" (UniqueName: \"kubernetes.io/projected/8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3-kube-api-access-lkwq2\") pod \"octavia-operator-controller-manager-5f4cd88d46-vqc5p\" (UID: \"8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.587058 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m65lc\" (UniqueName: \"kubernetes.io/projected/4174fb8a-905f-4d3a-9dbc-5212b68319f2-kube-api-access-m65lc\") pod \"swift-operator-controller-manager-547cbdb99f-6k5bf\" (UID: \"4174fb8a-905f-4d3a-9dbc-5212b68319f2\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.587103 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt2qh\" (UniqueName: \"kubernetes.io/projected/483c1dd7-425f-43b5-a848-efdc2d9899d0-kube-api-access-pt2qh\") pod \"placement-operator-controller-manager-79d5ccc684-4wd75\" (UID: \"483c1dd7-425f-43b5-a848-efdc2d9899d0\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.587161 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwmzv\" (UniqueName: \"kubernetes.io/projected/5fd3204c-f4d7-466e-94b4-8463575086be-kube-api-access-kwmzv\") pod \"test-operator-controller-manager-69797bbcbd-clf9c\" (UID: \"5fd3204c-f4d7-466e-94b4-8463575086be\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 
18:00:04.587193 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkztm\" (UniqueName: \"kubernetes.io/projected/2d7744e1-c01e-4dd9-87f2-7aa6695c2d60-kube-api-access-rkztm\") pod \"telemetry-operator-controller-manager-85cd9769bb-x8ql6\" (UID: \"2d7744e1-c01e-4dd9-87f2-7aa6695c2d60\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.605302 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.612715 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkztm\" (UniqueName: \"kubernetes.io/projected/2d7744e1-c01e-4dd9-87f2-7aa6695c2d60-kube-api-access-rkztm\") pod \"telemetry-operator-controller-manager-85cd9769bb-x8ql6\" (UID: \"2d7744e1-c01e-4dd9-87f2-7aa6695c2d60\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.634328 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m65lc\" (UniqueName: \"kubernetes.io/projected/4174fb8a-905f-4d3a-9dbc-5212b68319f2-kube-api-access-m65lc\") pod \"swift-operator-controller-manager-547cbdb99f-6k5bf\" (UID: \"4174fb8a-905f-4d3a-9dbc-5212b68319f2\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.634855 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt2qh\" (UniqueName: \"kubernetes.io/projected/483c1dd7-425f-43b5-a848-efdc2d9899d0-kube-api-access-pt2qh\") pod \"placement-operator-controller-manager-79d5ccc684-4wd75\" (UID: \"483c1dd7-425f-43b5-a848-efdc2d9899d0\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" Jan 
26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.689021 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-fmkqf"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.695067 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert\") pod \"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.695134 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwmzv\" (UniqueName: \"kubernetes.io/projected/5fd3204c-f4d7-466e-94b4-8463575086be-kube-api-access-kwmzv\") pod \"test-operator-controller-manager-69797bbcbd-clf9c\" (UID: \"5fd3204c-f4d7-466e-94b4-8463575086be\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" Jan 26 18:00:04 crc kubenswrapper[4787]: E0126 18:00:04.695284 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:04 crc kubenswrapper[4787]: E0126 18:00:04.695337 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert podName:8fe5e013-7524-461a-9fae-0867594144d5 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:05.695321612 +0000 UTC m=+974.402457745 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert") pod "infra-operator-controller-manager-694cf4f878-m9g4s" (UID: "8fe5e013-7524-461a-9fae-0867594144d5") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.701082 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.701780 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-fmkqf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.746341 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.754372 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.756252 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzdpn" event={"ID":"8fe60505-e284-4dc0-a89a-523a953f9c26","Type":"ContainerDied","Data":"f46001f34f425a7550b2cf03812a02b2a9e0c9b9616f027b06f5b47acd8e67aa"} Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.756294 4787 scope.go:117] "RemoveContainer" containerID="999eeaa2c582aa7ed76c57bad362d3e36a1183a60a90c91ea8a366263e9a6245" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.756461 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pzdpn" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.772672 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-n56s5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.774738 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwmzv\" (UniqueName: \"kubernetes.io/projected/5fd3204c-f4d7-466e-94b4-8463575086be-kube-api-access-kwmzv\") pod \"test-operator-controller-manager-69797bbcbd-clf9c\" (UID: \"5fd3204c-f4d7-466e-94b4-8463575086be\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.777808 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.802610 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.806390 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-fmkqf"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.807797 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" event={"ID":"08c4cd17-321a-453c-8d73-d19dcb3695ac","Type":"ContainerDied","Data":"3b3576b5454b0568426025c778412872a2692017f2b426ce7d146de912e43280"} Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.807853 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b3576b5454b0568426025c778412872a2692017f2b426ce7d146de912e43280" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.807972 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.879062 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.907836 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq82t\" (UniqueName: \"kubernetes.io/projected/302ec2db-963c-44d7-941e-51471e7ae3bb-kube-api-access-dq82t\") pod \"watcher-operator-controller-manager-564965969-fmkqf\" (UID: \"302ec2db-963c-44d7-941e-51471e7ae3bb\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-fmkqf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.911466 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.912428 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.915050 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.915940 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zbqkj" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.916160 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.923635 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.929737 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 
18:00:04.930852 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.932857 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-5qhx6" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.938851 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.943393 4787 scope.go:117] "RemoveContainer" containerID="71c500d3ef0db98b10d32906195dd0da47e23acab2836cd8cf14e334e020f2d5" Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.952066 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.957524 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.964188 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzdpn"] Jan 26 18:00:04 crc kubenswrapper[4787]: I0126 18:00:04.969095 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pzdpn"] Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.009598 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert\") pod \"openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5\" (UID: \"70abdb24-0f0e-477a-8c22-7a01f73c05f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 
18:00:05.009938 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq82t\" (UniqueName: \"kubernetes.io/projected/302ec2db-963c-44d7-941e-51471e7ae3bb-kube-api-access-dq82t\") pod \"watcher-operator-controller-manager-564965969-fmkqf\" (UID: \"302ec2db-963c-44d7-941e-51471e7ae3bb\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-fmkqf" Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.010374 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.010412 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert podName:70abdb24-0f0e-477a-8c22-7a01f73c05f2 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:06.01039948 +0000 UTC m=+974.717535613 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert") pod "openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" (UID: "70abdb24-0f0e-477a-8c22-7a01f73c05f2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.041748 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq82t\" (UniqueName: \"kubernetes.io/projected/302ec2db-963c-44d7-941e-51471e7ae3bb-kube-api-access-dq82t\") pod \"watcher-operator-controller-manager-564965969-fmkqf\" (UID: \"302ec2db-963c-44d7-941e-51471e7ae3bb\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-fmkqf" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.051637 4787 scope.go:117] "RemoveContainer" containerID="fdf2ef8cf19f14b34369348e5cc4cb68fec6780e4a34a598e294b2eadfe04de4" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.091191 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-fmkqf" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.113650 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js2b9\" (UniqueName: \"kubernetes.io/projected/b2e37e1d-9342-4006-9626-273178d301b0-kube-api-access-js2b9\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.113695 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.113735 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcrsx\" (UniqueName: \"kubernetes.io/projected/77de585e-c649-4a8e-82e5-fea5379cac6d-kube-api-access-tcrsx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-n5tjf\" (UID: \"77de585e-c649-4a8e-82e5-fea5379cac6d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.113802 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 
18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.118816 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5"] Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.214822 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcrsx\" (UniqueName: \"kubernetes.io/projected/77de585e-c649-4a8e-82e5-fea5379cac6d-kube-api-access-tcrsx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-n5tjf\" (UID: \"77de585e-c649-4a8e-82e5-fea5379cac6d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.214921 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.218225 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js2b9\" (UniqueName: \"kubernetes.io/projected/b2e37e1d-9342-4006-9626-273178d301b0-kube-api-access-js2b9\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.218270 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" 
Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.218509 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.218566 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs podName:b2e37e1d-9342-4006-9626-273178d301b0 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:05.718547885 +0000 UTC m=+974.425684018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs") pod "openstack-operator-controller-manager-b585d977c-4h4sg" (UID: "b2e37e1d-9342-4006-9626-273178d301b0") : secret "webhook-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.219016 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.219049 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs podName:b2e37e1d-9342-4006-9626-273178d301b0 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:05.719039326 +0000 UTC m=+974.426175459 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs") pod "openstack-operator-controller-manager-b585d977c-4h4sg" (UID: "b2e37e1d-9342-4006-9626-273178d301b0") : secret "metrics-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.244510 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcrsx\" (UniqueName: \"kubernetes.io/projected/77de585e-c649-4a8e-82e5-fea5379cac6d-kube-api-access-tcrsx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-n5tjf\" (UID: \"77de585e-c649-4a8e-82e5-fea5379cac6d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.248637 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js2b9\" (UniqueName: \"kubernetes.io/projected/b2e37e1d-9342-4006-9626-273178d301b0-kube-api-access-js2b9\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.347070 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.603540 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fe60505-e284-4dc0-a89a-523a953f9c26" path="/var/lib/kubelet/pods/8fe60505-e284-4dc0-a89a-523a953f9c26/volumes" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.604574 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk"] Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.604598 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf"] Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.610271 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp"] Jan 26 18:00:05 crc kubenswrapper[4787]: W0126 18:00:05.611878 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda42402ec_c6f5_4be4_b649_1bfb41ebf1b0.slice/crio-1d58964c2d661e0fe36efa2d0468486035827ffffd55b3247c44b2df255f4e1c WatchSource:0}: Error finding container 1d58964c2d661e0fe36efa2d0468486035827ffffd55b3247c44b2df255f4e1c: Status 404 returned error can't find the container with id 1d58964c2d661e0fe36efa2d0468486035827ffffd55b3247c44b2df255f4e1c Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.700944 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx"] Jan 26 18:00:05 crc kubenswrapper[4787]: W0126 18:00:05.702301 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01d314a4_2f86_4cc3_ac94_7a09b363a05d.slice/crio-0ea9c3d075a619639e825f3c13fd97aa37adba75ff30bcb10a4afefc917b517d 
WatchSource:0}: Error finding container 0ea9c3d075a619639e825f3c13fd97aa37adba75ff30bcb10a4afefc917b517d: Status 404 returned error can't find the container with id 0ea9c3d075a619639e825f3c13fd97aa37adba75ff30bcb10a4afefc917b517d Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.729993 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.730159 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.730181 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.730212 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert\") pod \"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.730251 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs podName:b2e37e1d-9342-4006-9626-273178d301b0 nodeName:}" failed. 
No retries permitted until 2026-01-26 18:00:06.730234889 +0000 UTC m=+975.437371022 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs") pod "openstack-operator-controller-manager-b585d977c-4h4sg" (UID: "b2e37e1d-9342-4006-9626-273178d301b0") : secret "metrics-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.730377 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.730430 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.730456 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs podName:b2e37e1d-9342-4006-9626-273178d301b0 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:06.730434433 +0000 UTC m=+975.437570666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs") pod "openstack-operator-controller-manager-b585d977c-4h4sg" (UID: "b2e37e1d-9342-4006-9626-273178d301b0") : secret "webhook-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: E0126 18:00:05.730488 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert podName:8fe5e013-7524-461a-9fae-0867594144d5 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:07.730469854 +0000 UTC m=+976.437605997 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert") pod "infra-operator-controller-manager-694cf4f878-m9g4s" (UID: "8fe5e013-7524-461a-9fae-0867594144d5") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.820561 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk" event={"ID":"a42402ec-c6f5-4be4-b649-1bfb41ebf1b0","Type":"ContainerStarted","Data":"1d58964c2d661e0fe36efa2d0468486035827ffffd55b3247c44b2df255f4e1c"} Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.826364 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" event={"ID":"6e4131c6-1507-4b15-92b7-29a2fe7f3775","Type":"ContainerStarted","Data":"0ecfe29ec3da4939265c8d0e0bab6eb63b8d5a17a9de3b43dd79674cc7dd6dcb"} Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.829541 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" event={"ID":"3923acf7-e06a-4351-84aa-7def61b4ca71","Type":"ContainerStarted","Data":"5d7b77e45b0b76cf21b164c838a098d3749957f6b5e38548fcee9403ca4f7e70"} Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.830892 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" event={"ID":"01d314a4-2f86-4cc3-ac94-7a09b363a05d","Type":"ContainerStarted","Data":"0ea9c3d075a619639e825f3c13fd97aa37adba75ff30bcb10a4afefc917b517d"} Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.835193 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" 
event={"ID":"62502932-23fe-4a77-a89a-26fd15f0f44f","Type":"ContainerStarted","Data":"50e0e754dacdd97b06e1290920619164b7a5db8d7197e03f3105277b8844ab7a"} Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.836510 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp" event={"ID":"c5f0e34d-3e5d-458a-a560-08769cb30849","Type":"ContainerStarted","Data":"79b2f000f28bbb2bee592b3d0f42f06958238f16273d8de9f5c00566fd035c17"} Jan 26 18:00:05 crc kubenswrapper[4787]: I0126 18:00:05.837354 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" event={"ID":"a72edcac-00ca-46a7-ab30-551c750eb2cd","Type":"ContainerStarted","Data":"0272d6cc89b37cbe8cd9466cc125ae47d5e904b829354feb4e45c9c26e976e5b"} Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.034450 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert\") pod \"openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5\" (UID: \"70abdb24-0f0e-477a-8c22-7a01f73c05f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.034624 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.034670 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert podName:70abdb24-0f0e-477a-8c22-7a01f73c05f2 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:08.034656747 +0000 UTC m=+976.741792870 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert") pod "openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" (UID: "70abdb24-0f0e-477a-8c22-7a01f73c05f2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.074035 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b"] Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.085011 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb"] Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.096725 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5"] Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.114333 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-fmkqf"] Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.144407 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l"] Jan 26 18:00:06 crc kubenswrapper[4787]: W0126 18:00:06.147833 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d7744e1_c01e_4dd9_87f2_7aa6695c2d60.slice/crio-d6170cea53f75ab5ccf9e3bf18af5f6f3e34ecf99c16f6238440f8e87f48752e WatchSource:0}: Error finding container d6170cea53f75ab5ccf9e3bf18af5f6f3e34ecf99c16f6238440f8e87f48752e: Status 404 returned error can't find the container with id d6170cea53f75ab5ccf9e3bf18af5f6f3e34ecf99c16f6238440f8e87f48752e Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.164681 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75"] Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.193994 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkwq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-vqc5p_openstack-operators(8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.195422 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" podUID="8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3" Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.197268 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6"] Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.198389 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68lg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-4v6fm_openstack-operators(c3415733-55e0-4c4f-8bb6-0663ddf67633): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.200204 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 
500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tcrsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-n5tjf_openstack-operators(77de585e-c649-4a8e-82e5-fea5379cac6d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.200870 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwmzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-clf9c_openstack-operators(5fd3204c-f4d7-466e-94b4-8463575086be): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.201159 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m65lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-6k5bf_openstack-operators(4174fb8a-905f-4d3a-9dbc-5212b68319f2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.201321 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" podUID="c3415733-55e0-4c4f-8bb6-0663ddf67633" Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.201426 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf" podUID="77de585e-c649-4a8e-82e5-fea5379cac6d" Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.204062 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" podUID="4174fb8a-905f-4d3a-9dbc-5212b68319f2" Jan 26 
18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.204085 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" podUID="5fd3204c-f4d7-466e-94b4-8463575086be" Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.230469 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm"] Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.242889 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8"] Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.250390 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c"] Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.256427 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p"] Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.261277 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf"] Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.265633 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf"] Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.588152 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.588268 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.645811 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.743375 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.743503 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.743593 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.743683 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs podName:b2e37e1d-9342-4006-9626-273178d301b0 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:08.743663914 +0000 UTC m=+977.450800047 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs") pod "openstack-operator-controller-manager-b585d977c-4h4sg" (UID: "b2e37e1d-9342-4006-9626-273178d301b0") : secret "webhook-server-cert" not found Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.743694 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.743749 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs podName:b2e37e1d-9342-4006-9626-273178d301b0 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:08.743732196 +0000 UTC m=+977.450868439 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs") pod "openstack-operator-controller-manager-b585d977c-4h4sg" (UID: "b2e37e1d-9342-4006-9626-273178d301b0") : secret "metrics-server-cert" not found Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.856287 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" event={"ID":"87ebc1f2-80bd-46db-a605-c3667e656f5b","Type":"ContainerStarted","Data":"fd80226376f4a18742211dca08c9468a12ebd905538382fd1ca61d7aefc2bb2c"} Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.857593 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" event={"ID":"b0561e24-5692-4790-9cbd-3a74a8c3ce69","Type":"ContainerStarted","Data":"e3e1865888a6e64ca9db43b6ed5e6ee372cf5551b9c9b3dd9ca8a938ee1cd001"} Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.859174 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-fmkqf" event={"ID":"302ec2db-963c-44d7-941e-51471e7ae3bb","Type":"ContainerStarted","Data":"90a4991126603a24d8562edfe01d6a3e0a0dd19982766e79f6211253006b621b"} Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.860341 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" event={"ID":"8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3","Type":"ContainerStarted","Data":"7b7287c0649c6030611847b6dc20503df7b9ef1912738716bfb0c82854cdc2b9"} Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.862716 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" podUID="8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3" Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.862725 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" event={"ID":"ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7","Type":"ContainerStarted","Data":"29235ecc6635606d57fae669c0cd7556cd5414e81d092b3ded341f5190b5f64c"} Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.864363 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" event={"ID":"4174fb8a-905f-4d3a-9dbc-5212b68319f2","Type":"ContainerStarted","Data":"97cbed8009ac56686b2fb7d91b732fd2f78065a87ad32ecf05c5f3d39618d675"} Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.865256 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" 
event={"ID":"5fd3204c-f4d7-466e-94b4-8463575086be","Type":"ContainerStarted","Data":"93830c364b5b8264d2e6bf3f23320e67b152ad274b9a20b92ece45c7196c1746"} Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.865633 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" podUID="4174fb8a-905f-4d3a-9dbc-5212b68319f2" Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.866629 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" podUID="5fd3204c-f4d7-466e-94b4-8463575086be" Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.867556 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" event={"ID":"c3415733-55e0-4c4f-8bb6-0663ddf67633","Type":"ContainerStarted","Data":"ac51168c7d0d43e151c5d3c46ed01cd54a6be20e7eb67d300159a443a1672117"} Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.868623 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" podUID="c3415733-55e0-4c4f-8bb6-0663ddf67633" Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.876225 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" event={"ID":"2d7744e1-c01e-4dd9-87f2-7aa6695c2d60","Type":"ContainerStarted","Data":"d6170cea53f75ab5ccf9e3bf18af5f6f3e34ecf99c16f6238440f8e87f48752e"} Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.890603 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf" event={"ID":"77de585e-c649-4a8e-82e5-fea5379cac6d","Type":"ContainerStarted","Data":"992caf7647daa9d4c5631127c64551fd81f4c6537fcee0826178df5400a24be8"} Jan 26 18:00:06 crc kubenswrapper[4787]: E0126 18:00:06.893664 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf" podUID="77de585e-c649-4a8e-82e5-fea5379cac6d" Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.896509 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8" event={"ID":"52a2ecbf-eca4-447b-a516-e8e71194c5ff","Type":"ContainerStarted","Data":"97c355d57120df9db0dac4b5466d3ebc657057d5b870871f5508bdf1b7ad7e61"} Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.904512 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" event={"ID":"3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59","Type":"ContainerStarted","Data":"93c16bd67162490c11061a154f7f84ae7d07d1468c307af5b99255412af54d2b"} Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.909548 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" 
event={"ID":"483c1dd7-425f-43b5-a848-efdc2d9899d0","Type":"ContainerStarted","Data":"41d4a509660d27db390a7e6667cfed00cea1ee34089c435d3e989191e45ed216"} Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.990927 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-br6x5" Jan 26 18:00:06 crc kubenswrapper[4787]: I0126 18:00:06.990985 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-br6x5" Jan 26 18:00:07 crc kubenswrapper[4787]: I0126 18:00:07.040890 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-br6x5" Jan 26 18:00:07 crc kubenswrapper[4787]: I0126 18:00:07.758485 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert\") pod \"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:07 crc kubenswrapper[4787]: E0126 18:00:07.758636 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:07 crc kubenswrapper[4787]: E0126 18:00:07.758706 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert podName:8fe5e013-7524-461a-9fae-0867594144d5 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:11.758681744 +0000 UTC m=+980.465817877 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert") pod "infra-operator-controller-manager-694cf4f878-m9g4s" (UID: "8fe5e013-7524-461a-9fae-0867594144d5") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:07 crc kubenswrapper[4787]: E0126 18:00:07.921763 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" podUID="5fd3204c-f4d7-466e-94b4-8463575086be" Jan 26 18:00:07 crc kubenswrapper[4787]: E0126 18:00:07.926998 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" podUID="4174fb8a-905f-4d3a-9dbc-5212b68319f2" Jan 26 18:00:07 crc kubenswrapper[4787]: E0126 18:00:07.926998 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf" podUID="77de585e-c649-4a8e-82e5-fea5379cac6d" Jan 26 18:00:07 crc kubenswrapper[4787]: E0126 18:00:07.927021 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" podUID="8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3" Jan 26 18:00:07 crc kubenswrapper[4787]: E0126 18:00:07.927044 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" podUID="c3415733-55e0-4c4f-8bb6-0663ddf67633" Jan 26 18:00:08 crc kubenswrapper[4787]: I0126 18:00:08.063535 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert\") pod \"openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5\" (UID: \"70abdb24-0f0e-477a-8c22-7a01f73c05f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:08 crc kubenswrapper[4787]: E0126 18:00:08.063915 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:00:08 crc kubenswrapper[4787]: E0126 18:00:08.064009 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert podName:70abdb24-0f0e-477a-8c22-7a01f73c05f2 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:12.063988642 +0000 UTC m=+980.771124775 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert") pod "openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" (UID: "70abdb24-0f0e-477a-8c22-7a01f73c05f2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:00:08 crc kubenswrapper[4787]: I0126 18:00:08.073374 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-br6x5" Jan 26 18:00:08 crc kubenswrapper[4787]: I0126 18:00:08.775893 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:08 crc kubenswrapper[4787]: E0126 18:00:08.776091 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:00:08 crc kubenswrapper[4787]: E0126 18:00:08.776158 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs podName:b2e37e1d-9342-4006-9626-273178d301b0 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:12.776140679 +0000 UTC m=+981.483276812 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs") pod "openstack-operator-controller-manager-b585d977c-4h4sg" (UID: "b2e37e1d-9342-4006-9626-273178d301b0") : secret "metrics-server-cert" not found Jan 26 18:00:08 crc kubenswrapper[4787]: I0126 18:00:08.776932 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:08 crc kubenswrapper[4787]: E0126 18:00:08.777052 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:00:08 crc kubenswrapper[4787]: E0126 18:00:08.777081 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs podName:b2e37e1d-9342-4006-9626-273178d301b0 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:12.77707214 +0000 UTC m=+981.484208273 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs") pod "openstack-operator-controller-manager-b585d977c-4h4sg" (UID: "b2e37e1d-9342-4006-9626-273178d301b0") : secret "webhook-server-cert" not found Jan 26 18:00:10 crc kubenswrapper[4787]: I0126 18:00:10.659118 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-br6x5"] Jan 26 18:00:10 crc kubenswrapper[4787]: I0126 18:00:10.659354 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-br6x5" podUID="02a84a32-50ab-4e52-a923-62f079ecec13" containerName="registry-server" containerID="cri-o://32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" gracePeriod=2 Jan 26 18:00:10 crc kubenswrapper[4787]: I0126 18:00:10.948145 4787 generic.go:334] "Generic (PLEG): container finished" podID="02a84a32-50ab-4e52-a923-62f079ecec13" containerID="32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" exitCode=0 Jan 26 18:00:10 crc kubenswrapper[4787]: I0126 18:00:10.948190 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br6x5" event={"ID":"02a84a32-50ab-4e52-a923-62f079ecec13","Type":"ContainerDied","Data":"32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb"} Jan 26 18:00:11 crc kubenswrapper[4787]: I0126 18:00:11.818931 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert\") pod \"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:11 crc kubenswrapper[4787]: E0126 18:00:11.819092 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: 
secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:11 crc kubenswrapper[4787]: E0126 18:00:11.819322 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert podName:8fe5e013-7524-461a-9fae-0867594144d5 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:19.819124014 +0000 UTC m=+988.526260147 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert") pod "infra-operator-controller-manager-694cf4f878-m9g4s" (UID: "8fe5e013-7524-461a-9fae-0867594144d5") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:12 crc kubenswrapper[4787]: I0126 18:00:12.122141 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert\") pod \"openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5\" (UID: \"70abdb24-0f0e-477a-8c22-7a01f73c05f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:12 crc kubenswrapper[4787]: E0126 18:00:12.122670 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:00:12 crc kubenswrapper[4787]: E0126 18:00:12.122826 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert podName:70abdb24-0f0e-477a-8c22-7a01f73c05f2 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:20.122804776 +0000 UTC m=+988.829940909 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert") pod "openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" (UID: "70abdb24-0f0e-477a-8c22-7a01f73c05f2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:00:12 crc kubenswrapper[4787]: I0126 18:00:12.833146 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:12 crc kubenswrapper[4787]: I0126 18:00:12.833236 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:12 crc kubenswrapper[4787]: E0126 18:00:12.833375 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:00:12 crc kubenswrapper[4787]: E0126 18:00:12.833431 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs podName:b2e37e1d-9342-4006-9626-273178d301b0 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:20.833411207 +0000 UTC m=+989.540547340 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs") pod "openstack-operator-controller-manager-b585d977c-4h4sg" (UID: "b2e37e1d-9342-4006-9626-273178d301b0") : secret "webhook-server-cert" not found Jan 26 18:00:12 crc kubenswrapper[4787]: E0126 18:00:12.833563 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:00:12 crc kubenswrapper[4787]: E0126 18:00:12.833707 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs podName:b2e37e1d-9342-4006-9626-273178d301b0 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:20.833679463 +0000 UTC m=+989.540815636 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs") pod "openstack-operator-controller-manager-b585d977c-4h4sg" (UID: "b2e37e1d-9342-4006-9626-273178d301b0") : secret "metrics-server-cert" not found Jan 26 18:00:16 crc kubenswrapper[4787]: I0126 18:00:16.652894 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 18:00:16 crc kubenswrapper[4787]: I0126 18:00:16.717179 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gwtn"] Jan 26 18:00:16 crc kubenswrapper[4787]: I0126 18:00:16.808550 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:00:16 crc kubenswrapper[4787]: I0126 18:00:16.808620 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:00:16 crc kubenswrapper[4787]: I0126 18:00:16.988253 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9gwtn" podUID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerName="registry-server" containerID="cri-o://89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71" gracePeriod=2 Jan 26 18:00:16 crc kubenswrapper[4787]: E0126 18:00:16.991003 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" containerID="32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:16 crc kubenswrapper[4787]: E0126 18:00:16.991803 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" containerID="32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:16 crc kubenswrapper[4787]: E0126 18:00:16.993914 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" containerID="32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 
26 18:00:16 crc kubenswrapper[4787]: E0126 18:00:16.994078 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-br6x5" podUID="02a84a32-50ab-4e52-a923-62f079ecec13" containerName="registry-server" Jan 26 18:00:19 crc kubenswrapper[4787]: I0126 18:00:19.836451 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert\") pod \"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:19 crc kubenswrapper[4787]: E0126 18:00:19.836564 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:19 crc kubenswrapper[4787]: E0126 18:00:19.836915 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert podName:8fe5e013-7524-461a-9fae-0867594144d5 nodeName:}" failed. No retries permitted until 2026-01-26 18:00:35.836900587 +0000 UTC m=+1004.544036720 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert") pod "infra-operator-controller-manager-694cf4f878-m9g4s" (UID: "8fe5e013-7524-461a-9fae-0867594144d5") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:00:20 crc kubenswrapper[4787]: I0126 18:00:20.140690 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert\") pod \"openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5\" (UID: \"70abdb24-0f0e-477a-8c22-7a01f73c05f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:20 crc kubenswrapper[4787]: I0126 18:00:20.147734 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70abdb24-0f0e-477a-8c22-7a01f73c05f2-cert\") pod \"openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5\" (UID: \"70abdb24-0f0e-477a-8c22-7a01f73c05f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:20 crc kubenswrapper[4787]: I0126 18:00:20.314817 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:20 crc kubenswrapper[4787]: I0126 18:00:20.850984 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:20 crc kubenswrapper[4787]: I0126 18:00:20.851473 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:20 crc kubenswrapper[4787]: I0126 18:00:20.858107 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-webhook-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:20 crc kubenswrapper[4787]: I0126 18:00:20.862081 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2e37e1d-9342-4006-9626-273178d301b0-metrics-certs\") pod \"openstack-operator-controller-manager-b585d977c-4h4sg\" (UID: \"b2e37e1d-9342-4006-9626-273178d301b0\") " pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:20 crc kubenswrapper[4787]: I0126 18:00:20.933937 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:29 crc kubenswrapper[4787]: E0126 18:00:26.588264 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71 is running failed: container process not found" containerID="89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:29 crc kubenswrapper[4787]: E0126 18:00:26.588918 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71 is running failed: container process not found" containerID="89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:29 crc kubenswrapper[4787]: E0126 18:00:26.589234 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71 is running failed: container process not found" containerID="89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:29 crc kubenswrapper[4787]: E0126 18:00:26.589260 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-9gwtn" podUID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerName="registry-server" Jan 26 18:00:29 crc kubenswrapper[4787]: E0126 18:00:26.991467 4787 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" containerID="32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:29 crc kubenswrapper[4787]: E0126 18:00:26.992170 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" containerID="32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:29 crc kubenswrapper[4787]: E0126 18:00:26.992561 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" containerID="32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:29 crc kubenswrapper[4787]: E0126 18:00:26.992616 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-br6x5" podUID="02a84a32-50ab-4e52-a923-62f079ecec13" containerName="registry-server" Jan 26 18:00:29 crc kubenswrapper[4787]: I0126 18:00:29.087786 4787 generic.go:334] "Generic (PLEG): container finished" podID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerID="89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71" exitCode=0 Jan 26 
18:00:29 crc kubenswrapper[4787]: I0126 18:00:29.087825 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gwtn" event={"ID":"8b3db753-c982-467b-b36a-c9d8a1e0d0a6","Type":"ContainerDied","Data":"89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71"} Jan 26 18:00:29 crc kubenswrapper[4787]: E0126 18:00:29.520689 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 26 18:00:29 crc kubenswrapper[4787]: E0126 18:00:29.520867 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67h55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-tcj6b_openstack-operators(b0561e24-5692-4790-9cbd-3a74a8c3ce69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:00:29 crc kubenswrapper[4787]: E0126 18:00:29.522188 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" podUID="b0561e24-5692-4790-9cbd-3a74a8c3ce69" Jan 26 18:00:30 crc kubenswrapper[4787]: E0126 18:00:30.096090 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" podUID="b0561e24-5692-4790-9cbd-3a74a8c3ce69" Jan 26 18:00:31 crc kubenswrapper[4787]: E0126 18:00:31.213984 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 26 18:00:31 crc kubenswrapper[4787]: E0126 18:00:31.214251 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n828r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-nvpxf_openstack-operators(62502932-23fe-4a77-a89a-26fd15f0f44f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:00:31 crc kubenswrapper[4787]: E0126 18:00:31.215472 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" podUID="62502932-23fe-4a77-a89a-26fd15f0f44f" Jan 26 18:00:32 crc kubenswrapper[4787]: E0126 18:00:32.109917 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" podUID="62502932-23fe-4a77-a89a-26fd15f0f44f" Jan 26 18:00:32 crc kubenswrapper[4787]: E0126 18:00:32.490530 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 26 18:00:32 crc kubenswrapper[4787]: E0126 18:00:32.490741 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h6w56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-gfrbb_openstack-operators(87ebc1f2-80bd-46db-a605-c3667e656f5b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:00:32 crc kubenswrapper[4787]: E0126 18:00:32.491991 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" podUID="87ebc1f2-80bd-46db-a605-c3667e656f5b" Jan 26 18:00:32 crc kubenswrapper[4787]: E0126 18:00:32.873555 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 18:00:32 crc kubenswrapper[4787]: E0126 18:00:32.873737 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggg4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-trx4l_openstack-operators(3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:00:32 crc kubenswrapper[4787]: E0126 18:00:32.874938 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" podUID="3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59" Jan 26 18:00:33 crc kubenswrapper[4787]: E0126 18:00:33.115042 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" podUID="3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59" Jan 26 18:00:33 crc kubenswrapper[4787]: E0126 18:00:33.115503 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" podUID="87ebc1f2-80bd-46db-a605-c3667e656f5b" Jan 26 18:00:33 crc kubenswrapper[4787]: E0126 18:00:33.195907 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd" Jan 26 18:00:33 crc kubenswrapper[4787]: E0126 18:00:33.196109 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8bf7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-w7bvz_openstack-operators(a72edcac-00ca-46a7-ab30-551c750eb2cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:00:33 crc kubenswrapper[4787]: E0126 18:00:33.197346 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" podUID="a72edcac-00ca-46a7-ab30-551c750eb2cd" Jan 26 18:00:33 crc kubenswrapper[4787]: E0126 18:00:33.807426 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 26 18:00:33 crc kubenswrapper[4787]: E0126 18:00:33.807613 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pt2qh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-4wd75_openstack-operators(483c1dd7-425f-43b5-a848-efdc2d9899d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:00:33 crc kubenswrapper[4787]: E0126 18:00:33.808816 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" podUID="483c1dd7-425f-43b5-a848-efdc2d9899d0" Jan 26 18:00:34 crc kubenswrapper[4787]: E0126 18:00:34.120000 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" podUID="483c1dd7-425f-43b5-a848-efdc2d9899d0" Jan 26 18:00:34 crc kubenswrapper[4787]: E0126 18:00:34.122070 4787 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" podUID="a72edcac-00ca-46a7-ab30-551c750eb2cd" Jan 26 18:00:34 crc kubenswrapper[4787]: E0126 18:00:34.770249 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 26 18:00:34 crc kubenswrapper[4787]: E0126 18:00:34.770430 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gfszc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-9n2vm_openstack-operators(6e4131c6-1507-4b15-92b7-29a2fe7f3775): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:00:34 crc kubenswrapper[4787]: E0126 18:00:34.771620 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" podUID="6e4131c6-1507-4b15-92b7-29a2fe7f3775" Jan 26 18:00:35 crc kubenswrapper[4787]: E0126 18:00:35.127325 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" podUID="6e4131c6-1507-4b15-92b7-29a2fe7f3775" Jan 26 18:00:35 crc kubenswrapper[4787]: E0126 18:00:35.382347 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7" Jan 26 18:00:35 crc kubenswrapper[4787]: E0126 18:00:35.382598 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wtjck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-7478f7dbf9-ggtn5_openstack-operators(3923acf7-e06a-4351-84aa-7def61b4ca71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:00:35 crc kubenswrapper[4787]: E0126 18:00:35.383834 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" podUID="3923acf7-e06a-4351-84aa-7def61b4ca71" Jan 26 18:00:35 crc kubenswrapper[4787]: I0126 18:00:35.899496 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert\") pod 
\"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:35 crc kubenswrapper[4787]: I0126 18:00:35.905438 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fe5e013-7524-461a-9fae-0867594144d5-cert\") pod \"infra-operator-controller-manager-694cf4f878-m9g4s\" (UID: \"8fe5e013-7524-461a-9fae-0867594144d5\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:35 crc kubenswrapper[4787]: I0126 18:00:35.942400 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.133620 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" podUID="3923acf7-e06a-4351-84aa-7def61b4ca71" Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.588846 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71 is running failed: container process not found" containerID="89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.589243 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71 is running failed: container process not found" containerID="89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.589562 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71 is running failed: container process not found" containerID="89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.589596 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-9gwtn" podUID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerName="registry-server" Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.853512 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.854067 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkztm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-x8ql6_openstack-operators(2d7744e1-c01e-4dd9-87f2-7aa6695c2d60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.855250 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" podUID="2d7744e1-c01e-4dd9-87f2-7aa6695c2d60" Jan 26 18:00:36 crc kubenswrapper[4787]: I0126 18:00:36.899418 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.992442 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" containerID="32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.992912 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" containerID="32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.993481 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" containerID="32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:00:36 crc kubenswrapper[4787]: E0126 18:00:36.993559 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-br6x5" podUID="02a84a32-50ab-4e52-a923-62f079ecec13" containerName="registry-server" Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.015436 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-292fk\" (UniqueName: \"kubernetes.io/projected/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-kube-api-access-292fk\") pod \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.015539 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-catalog-content\") pod \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.015602 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-utilities\") pod \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\" (UID: \"8b3db753-c982-467b-b36a-c9d8a1e0d0a6\") " Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.016676 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-utilities" (OuterVolumeSpecName: "utilities") pod "8b3db753-c982-467b-b36a-c9d8a1e0d0a6" (UID: "8b3db753-c982-467b-b36a-c9d8a1e0d0a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.022240 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-kube-api-access-292fk" (OuterVolumeSpecName: "kube-api-access-292fk") pod "8b3db753-c982-467b-b36a-c9d8a1e0d0a6" (UID: "8b3db753-c982-467b-b36a-c9d8a1e0d0a6"). InnerVolumeSpecName "kube-api-access-292fk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.046016 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b3db753-c982-467b-b36a-c9d8a1e0d0a6" (UID: "8b3db753-c982-467b-b36a-c9d8a1e0d0a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.117049 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-292fk\" (UniqueName: \"kubernetes.io/projected/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-kube-api-access-292fk\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.117096 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.117109 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3db753-c982-467b-b36a-c9d8a1e0d0a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.139410 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gwtn" Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.142837 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gwtn" event={"ID":"8b3db753-c982-467b-b36a-c9d8a1e0d0a6","Type":"ContainerDied","Data":"405b256ddb8990092f339f928dcba11c4edb86584de89e6985e62cccf0dd4df4"} Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.142913 4787 scope.go:117] "RemoveContainer" containerID="89b42eb78c83620bd7e9db4ee994fb10657dc5bf149bcb3e3774c22eaa1a3a71" Jan 26 18:00:37 crc kubenswrapper[4787]: E0126 18:00:37.143760 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" podUID="2d7744e1-c01e-4dd9-87f2-7aa6695c2d60" Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.192170 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gwtn"] Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.198284 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gwtn"] Jan 26 18:00:37 crc kubenswrapper[4787]: I0126 18:00:37.601337 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" path="/var/lib/kubelet/pods/8b3db753-c982-467b-b36a-c9d8a1e0d0a6/volumes" Jan 26 18:00:41 crc kubenswrapper[4787]: E0126 18:00:41.123184 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 26 18:00:41 crc 
kubenswrapper[4787]: E0126 18:00:41.123693 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zx2hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-zhtc5_openstack-operators(ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:00:41 crc kubenswrapper[4787]: E0126 18:00:41.124929 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" podUID="ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7" Jan 26 18:00:41 crc kubenswrapper[4787]: E0126 18:00:41.165061 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" podUID="ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7" Jan 26 18:00:41 crc kubenswrapper[4787]: E0126 18:00:41.732917 4787 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 18:00:41 crc kubenswrapper[4787]: E0126 18:00:41.733146 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dts9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-bswjx_openstack-operators(01d314a4-2f86-4cc3-ac94-7a09b363a05d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:00:41 crc kubenswrapper[4787]: E0126 18:00:41.734442 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" podUID="01d314a4-2f86-4cc3-ac94-7a09b363a05d" Jan 26 18:00:41 crc kubenswrapper[4787]: I0126 18:00:41.802204 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-br6x5" Jan 26 18:00:41 crc kubenswrapper[4787]: I0126 18:00:41.904970 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-catalog-content\") pod \"02a84a32-50ab-4e52-a923-62f079ecec13\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " Jan 26 18:00:41 crc kubenswrapper[4787]: I0126 18:00:41.905118 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j522g\" (UniqueName: \"kubernetes.io/projected/02a84a32-50ab-4e52-a923-62f079ecec13-kube-api-access-j522g\") pod \"02a84a32-50ab-4e52-a923-62f079ecec13\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " Jan 26 18:00:41 crc kubenswrapper[4787]: I0126 18:00:41.905150 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-utilities\") pod \"02a84a32-50ab-4e52-a923-62f079ecec13\" (UID: \"02a84a32-50ab-4e52-a923-62f079ecec13\") " Jan 26 18:00:41 crc kubenswrapper[4787]: I0126 18:00:41.906700 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-utilities" (OuterVolumeSpecName: "utilities") pod "02a84a32-50ab-4e52-a923-62f079ecec13" (UID: "02a84a32-50ab-4e52-a923-62f079ecec13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:00:41 crc kubenswrapper[4787]: I0126 18:00:41.921354 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a84a32-50ab-4e52-a923-62f079ecec13-kube-api-access-j522g" (OuterVolumeSpecName: "kube-api-access-j522g") pod "02a84a32-50ab-4e52-a923-62f079ecec13" (UID: "02a84a32-50ab-4e52-a923-62f079ecec13"). InnerVolumeSpecName "kube-api-access-j522g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:00:41 crc kubenswrapper[4787]: I0126 18:00:41.963453 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02a84a32-50ab-4e52-a923-62f079ecec13" (UID: "02a84a32-50ab-4e52-a923-62f079ecec13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.006927 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j522g\" (UniqueName: \"kubernetes.io/projected/02a84a32-50ab-4e52-a923-62f079ecec13-kube-api-access-j522g\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.006988 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.006999 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a84a32-50ab-4e52-a923-62f079ecec13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.173643 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br6x5" event={"ID":"02a84a32-50ab-4e52-a923-62f079ecec13","Type":"ContainerDied","Data":"14f778a97aa3bd485ac6ea5fe9e0515bd7f0cf98da8b7dee74f9375ea4a27c3a"} Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.173667 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-br6x5" Jan 26 18:00:42 crc kubenswrapper[4787]: E0126 18:00:42.174897 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" podUID="01d314a4-2f86-4cc3-ac94-7a09b363a05d" Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.208388 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-br6x5"] Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.214514 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-br6x5"] Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.315087 4787 scope.go:117] "RemoveContainer" containerID="cf54e1e7784652ae4f2205720aea24ea42c78e8f1cfe0e45aa19773a3da55583" Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.391007 4787 scope.go:117] "RemoveContainer" containerID="bf388de7a1c6c30b6a8160b7348414d039d818fb5098345aaf3cf9fb5e79897e" Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.488647 4787 scope.go:117] "RemoveContainer" containerID="32e4d6d309d1d3e4ca590f7f1af50495367ade177d819d5221204aa0c9a882bb" Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.549454 4787 scope.go:117] "RemoveContainer" containerID="61cf25cec3ace628ed559fdbb873ce7de95f3e7c582bd0c0b89a3d70af9aafb2" Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.620997 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5"] Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.625841 4787 scope.go:117] "RemoveContainer" 
containerID="f49425525db862620ad59af0fc64f22a01bf24b2cb3750f27e79082b304f5db6" Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.664809 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.813915 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg"] Jan 26 18:00:42 crc kubenswrapper[4787]: W0126 18:00:42.814981 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2e37e1d_9342_4006_9626_273178d301b0.slice/crio-0d5f1b2504e07945808ac7f781a119a7a7b8fddc487872a30d5eb440b5994d02 WatchSource:0}: Error finding container 0d5f1b2504e07945808ac7f781a119a7a7b8fddc487872a30d5eb440b5994d02: Status 404 returned error can't find the container with id 0d5f1b2504e07945808ac7f781a119a7a7b8fddc487872a30d5eb440b5994d02 Jan 26 18:00:42 crc kubenswrapper[4787]: I0126 18:00:42.923916 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s"] Jan 26 18:00:42 crc kubenswrapper[4787]: W0126 18:00:42.938522 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fe5e013_7524_461a_9fae_0867594144d5.slice/crio-eb3a5f7cd1792ce6baa5bf4d3d65a608b3c76689eb387dd07e3c0d6a98336541 WatchSource:0}: Error finding container eb3a5f7cd1792ce6baa5bf4d3d65a608b3c76689eb387dd07e3c0d6a98336541: Status 404 returned error can't find the container with id eb3a5f7cd1792ce6baa5bf4d3d65a608b3c76689eb387dd07e3c0d6a98336541 Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.182983 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" 
event={"ID":"8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3","Type":"ContainerStarted","Data":"cc8d0483ea70420fea89038b81e98cb4f22f5f9c5dd207449313c6515f8096a4"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.183502 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.184271 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" event={"ID":"b2e37e1d-9342-4006-9626-273178d301b0","Type":"ContainerStarted","Data":"58a46b16753430ee265baa83e2eed61af897af7091db34103a10c9a594d54ca7"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.184296 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" event={"ID":"b2e37e1d-9342-4006-9626-273178d301b0","Type":"ContainerStarted","Data":"0d5f1b2504e07945808ac7f781a119a7a7b8fddc487872a30d5eb440b5994d02"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.184564 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.186315 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf" event={"ID":"77de585e-c649-4a8e-82e5-fea5379cac6d","Type":"ContainerStarted","Data":"749cbf10fc1621c6f3b9430475a35b9b1d66f4c84e2057a653e048e218422fa8"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.187822 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-fmkqf" event={"ID":"302ec2db-963c-44d7-941e-51471e7ae3bb","Type":"ContainerStarted","Data":"223c1f3c96513c54309f7211abbeb40ef51bc93ad657a30ec1f24489f7d74582"} Jan 26 18:00:43 crc 
kubenswrapper[4787]: I0126 18:00:43.187958 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-fmkqf" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.189305 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" event={"ID":"8fe5e013-7524-461a-9fae-0867594144d5","Type":"ContainerStarted","Data":"eb3a5f7cd1792ce6baa5bf4d3d65a608b3c76689eb387dd07e3c0d6a98336541"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.191226 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" event={"ID":"4174fb8a-905f-4d3a-9dbc-5212b68319f2","Type":"ContainerStarted","Data":"2b3859fcfdc0e46ec3b5e6e1c3cc32f4ccd8f85b90ce471bfa04bd2b862123f6"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.191407 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.192885 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8" event={"ID":"52a2ecbf-eca4-447b-a516-e8e71194c5ff","Type":"ContainerStarted","Data":"8643a33d83744e558c6de5ef9d39fc3a2e6e4b81676bb2eb31ec0264c4441c70"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.192993 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.195161 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp" 
event={"ID":"c5f0e34d-3e5d-458a-a560-08769cb30849","Type":"ContainerStarted","Data":"9c4f97d271f99afb198b660b3b6e2c1e08fb571f586e0a080c1c990bb0217bbc"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.195234 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.196511 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" event={"ID":"5fd3204c-f4d7-466e-94b4-8463575086be","Type":"ContainerStarted","Data":"224a758615b0f95771357008e90f0d4af0b20639e689c5c665c2f4ccdeb3ff29"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.196729 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.197768 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" event={"ID":"c3415733-55e0-4c4f-8bb6-0663ddf67633","Type":"ContainerStarted","Data":"3fd44d3bd85f50bc5864ba89413102b91b743e9b42bdf1d605dc14df80c655b8"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.197935 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.200401 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk" event={"ID":"a42402ec-c6f5-4be4-b649-1bfb41ebf1b0","Type":"ContainerStarted","Data":"7161812b0703f999da1b2c6ed582af7fc67c5e8ddce835e23ca53b0eb9caed0f"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.200554 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.201808 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" event={"ID":"70abdb24-0f0e-477a-8c22-7a01f73c05f2","Type":"ContainerStarted","Data":"04f3db61bbb4e218d32a28eedf7a605121bbeafd9d3dbf183f619469f56fde69"} Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.347499 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" podStartSLOduration=4.221493217 podStartE2EDuration="40.347473814s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.1937099 +0000 UTC m=+974.900846043" lastFinishedPulling="2026-01-26 18:00:42.319690507 +0000 UTC m=+1011.026826640" observedRunningTime="2026-01-26 18:00:43.302724829 +0000 UTC m=+1012.009860972" watchObservedRunningTime="2026-01-26 18:00:43.347473814 +0000 UTC m=+1012.054609957" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.371502 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk" podStartSLOduration=7.339052537 podStartE2EDuration="40.371478354s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:05.613586429 +0000 UTC m=+974.320722562" lastFinishedPulling="2026-01-26 18:00:38.646012246 +0000 UTC m=+1007.353148379" observedRunningTime="2026-01-26 18:00:43.368337723 +0000 UTC m=+1012.075473866" watchObservedRunningTime="2026-01-26 18:00:43.371478354 +0000 UTC m=+1012.078614487" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.406309 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" 
podStartSLOduration=4.284648277 podStartE2EDuration="40.406288906s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.198193711 +0000 UTC m=+974.905329844" lastFinishedPulling="2026-01-26 18:00:42.31983434 +0000 UTC m=+1011.026970473" observedRunningTime="2026-01-26 18:00:43.401914018 +0000 UTC m=+1012.109050141" watchObservedRunningTime="2026-01-26 18:00:43.406288906 +0000 UTC m=+1012.113425059" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.430012 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp" podStartSLOduration=7.388572559 podStartE2EDuration="40.429988928s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:05.604676128 +0000 UTC m=+974.311812261" lastFinishedPulling="2026-01-26 18:00:38.646092497 +0000 UTC m=+1007.353228630" observedRunningTime="2026-01-26 18:00:43.420452734 +0000 UTC m=+1012.127588867" watchObservedRunningTime="2026-01-26 18:00:43.429988928 +0000 UTC m=+1012.137125061" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.452736 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-fmkqf" podStartSLOduration=6.945160431 podStartE2EDuration="39.452699048s" podCreationTimestamp="2026-01-26 18:00:04 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.138502969 +0000 UTC m=+974.845639092" lastFinishedPulling="2026-01-26 18:00:38.646041576 +0000 UTC m=+1007.353177709" observedRunningTime="2026-01-26 18:00:43.445077446 +0000 UTC m=+1012.152213599" watchObservedRunningTime="2026-01-26 18:00:43.452699048 +0000 UTC m=+1012.159835181" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.472151 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" 
podStartSLOduration=3.307459769 podStartE2EDuration="39.472128815s" podCreationTimestamp="2026-01-26 18:00:04 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.20084085 +0000 UTC m=+974.907976983" lastFinishedPulling="2026-01-26 18:00:42.365509896 +0000 UTC m=+1011.072646029" observedRunningTime="2026-01-26 18:00:43.46925677 +0000 UTC m=+1012.176392913" watchObservedRunningTime="2026-01-26 18:00:43.472128815 +0000 UTC m=+1012.179264948" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.500509 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" podStartSLOduration=3.381739377 podStartE2EDuration="39.500486221s" podCreationTimestamp="2026-01-26 18:00:04 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.200701367 +0000 UTC m=+974.907837510" lastFinishedPulling="2026-01-26 18:00:42.319448221 +0000 UTC m=+1011.026584354" observedRunningTime="2026-01-26 18:00:43.49462284 +0000 UTC m=+1012.201758983" watchObservedRunningTime="2026-01-26 18:00:43.500486221 +0000 UTC m=+1012.207622364" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.512235 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8" podStartSLOduration=8.004384621 podStartE2EDuration="40.512190714s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.138226423 +0000 UTC m=+974.845362556" lastFinishedPulling="2026-01-26 18:00:38.646032516 +0000 UTC m=+1007.353168649" observedRunningTime="2026-01-26 18:00:43.508918881 +0000 UTC m=+1012.216055034" watchObservedRunningTime="2026-01-26 18:00:43.512190714 +0000 UTC m=+1012.219326847" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.543130 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" 
podStartSLOduration=39.543112458 podStartE2EDuration="39.543112458s" podCreationTimestamp="2026-01-26 18:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:00:43.540824478 +0000 UTC m=+1012.247960631" watchObservedRunningTime="2026-01-26 18:00:43.543112458 +0000 UTC m=+1012.250248591" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.568328 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n5tjf" podStartSLOduration=3.335787584 podStartE2EDuration="39.568308734s" podCreationTimestamp="2026-01-26 18:00:04 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.199843068 +0000 UTC m=+974.906979201" lastFinishedPulling="2026-01-26 18:00:42.432364228 +0000 UTC m=+1011.139500351" observedRunningTime="2026-01-26 18:00:43.562595036 +0000 UTC m=+1012.269731169" watchObservedRunningTime="2026-01-26 18:00:43.568308734 +0000 UTC m=+1012.275444867" Jan 26 18:00:43 crc kubenswrapper[4787]: I0126 18:00:43.605063 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a84a32-50ab-4e52-a923-62f079ecec13" path="/var/lib/kubelet/pods/02a84a32-50ab-4e52-a923-62f079ecec13/volumes" Jan 26 18:00:46 crc kubenswrapper[4787]: I0126 18:00:46.241471 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" event={"ID":"70abdb24-0f0e-477a-8c22-7a01f73c05f2","Type":"ContainerStarted","Data":"f56e5a21c18ce745de80626af244633406f6916933a40899cc4e0d67b9230c1f"} Jan 26 18:00:46 crc kubenswrapper[4787]: I0126 18:00:46.242299 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:00:46 crc kubenswrapper[4787]: I0126 18:00:46.247994 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" event={"ID":"8fe5e013-7524-461a-9fae-0867594144d5","Type":"ContainerStarted","Data":"33c260858b1adebaec525e29240ca2f1cef13fe428d2d805a4329fe4eabb6a87"} Jan 26 18:00:46 crc kubenswrapper[4787]: I0126 18:00:46.248196 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:46 crc kubenswrapper[4787]: I0126 18:00:46.281866 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" podStartSLOduration=39.051837103 podStartE2EDuration="42.281843049s" podCreationTimestamp="2026-01-26 18:00:04 +0000 UTC" firstStartedPulling="2026-01-26 18:00:42.664582224 +0000 UTC m=+1011.371718357" lastFinishedPulling="2026-01-26 18:00:45.89458818 +0000 UTC m=+1014.601724303" observedRunningTime="2026-01-26 18:00:46.274575345 +0000 UTC m=+1014.981711488" watchObservedRunningTime="2026-01-26 18:00:46.281843049 +0000 UTC m=+1014.988979182" Jan 26 18:00:46 crc kubenswrapper[4787]: I0126 18:00:46.302889 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" podStartSLOduration=40.350773939 podStartE2EDuration="43.302870181s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:42.944153105 +0000 UTC m=+1011.651289238" lastFinishedPulling="2026-01-26 18:00:45.896249347 +0000 UTC m=+1014.603385480" observedRunningTime="2026-01-26 18:00:46.299571877 +0000 UTC m=+1015.006708010" watchObservedRunningTime="2026-01-26 18:00:46.302870181 +0000 UTC m=+1015.010006314" Jan 26 18:00:46 crc kubenswrapper[4787]: I0126 18:00:46.807675 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:00:46 crc kubenswrapper[4787]: I0126 18:00:46.807753 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:00:47 crc kubenswrapper[4787]: I0126 18:00:47.257160 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" event={"ID":"62502932-23fe-4a77-a89a-26fd15f0f44f","Type":"ContainerStarted","Data":"62d835c788160030e4aaa692f672d3b7da81e9f52074978afc99b68b7724242e"} Jan 26 18:00:47 crc kubenswrapper[4787]: I0126 18:00:47.285154 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" podStartSLOduration=3.639624017 podStartE2EDuration="44.285122846s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:05.607304408 +0000 UTC m=+974.314440541" lastFinishedPulling="2026-01-26 18:00:46.252803237 +0000 UTC m=+1014.959939370" observedRunningTime="2026-01-26 18:00:47.282051287 +0000 UTC m=+1015.989187440" watchObservedRunningTime="2026-01-26 18:00:47.285122846 +0000 UTC m=+1015.992259019" Jan 26 18:00:48 crc kubenswrapper[4787]: I0126 18:00:48.263377 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" event={"ID":"a72edcac-00ca-46a7-ab30-551c750eb2cd","Type":"ContainerStarted","Data":"bd24b1b23d7bd02e22e8b2e66508652e6262a373c210693545f4eabf84e8d0cc"} Jan 26 18:00:48 crc kubenswrapper[4787]: I0126 18:00:48.263904 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" Jan 26 18:00:48 crc kubenswrapper[4787]: I0126 18:00:48.264627 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" event={"ID":"b0561e24-5692-4790-9cbd-3a74a8c3ce69","Type":"ContainerStarted","Data":"7fc265e6e4642bd34cf8565e8fd592257f5b3d062f2428dcb64812f91161f793"} Jan 26 18:00:48 crc kubenswrapper[4787]: I0126 18:00:48.264832 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" Jan 26 18:00:48 crc kubenswrapper[4787]: I0126 18:00:48.284920 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" podStartSLOduration=2.144948371 podStartE2EDuration="45.284895163s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:04.915251062 +0000 UTC m=+973.622387195" lastFinishedPulling="2026-01-26 18:00:48.055197854 +0000 UTC m=+1016.762333987" observedRunningTime="2026-01-26 18:00:48.278415497 +0000 UTC m=+1016.985551630" watchObservedRunningTime="2026-01-26 18:00:48.284895163 +0000 UTC m=+1016.992031296" Jan 26 18:00:48 crc kubenswrapper[4787]: I0126 18:00:48.303160 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" podStartSLOduration=3.497830541 podStartE2EDuration="45.302735834s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.109101569 +0000 UTC m=+974.816237702" lastFinishedPulling="2026-01-26 18:00:47.914006862 +0000 UTC m=+1016.621142995" observedRunningTime="2026-01-26 18:00:48.30076129 +0000 UTC m=+1017.007897423" watchObservedRunningTime="2026-01-26 18:00:48.302735834 +0000 UTC m=+1017.009871967" Jan 26 18:00:49 crc 
kubenswrapper[4787]: I0126 18:00:49.271624 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" event={"ID":"483c1dd7-425f-43b5-a848-efdc2d9899d0","Type":"ContainerStarted","Data":"9b36659cdf90bbfc4e94017dc3ffc6edcd3c614f7ef4eb022ffd16262a814a72"} Jan 26 18:00:49 crc kubenswrapper[4787]: I0126 18:00:49.272149 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" Jan 26 18:00:49 crc kubenswrapper[4787]: I0126 18:00:49.287351 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" podStartSLOduration=3.169021969 podStartE2EDuration="45.287334271s" podCreationTimestamp="2026-01-26 18:00:04 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.137705492 +0000 UTC m=+974.844841625" lastFinishedPulling="2026-01-26 18:00:48.256017794 +0000 UTC m=+1016.963153927" observedRunningTime="2026-01-26 18:00:49.286351239 +0000 UTC m=+1017.993487362" watchObservedRunningTime="2026-01-26 18:00:49.287334271 +0000 UTC m=+1017.994470404" Jan 26 18:00:50 crc kubenswrapper[4787]: I0126 18:00:50.278768 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" event={"ID":"3923acf7-e06a-4351-84aa-7def61b4ca71","Type":"ContainerStarted","Data":"8ada282b24b1d86e9c28cfd8c25bf2ef758c781025c176f99364265413136984"} Jan 26 18:00:50 crc kubenswrapper[4787]: I0126 18:00:50.279004 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" Jan 26 18:00:50 crc kubenswrapper[4787]: I0126 18:00:50.279967 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" 
event={"ID":"3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59","Type":"ContainerStarted","Data":"321c8b2debe9df987c31fafd11ba88934bd7087ae2dd4a7c6214308174aa3b9e"} Jan 26 18:00:50 crc kubenswrapper[4787]: I0126 18:00:50.280541 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" Jan 26 18:00:50 crc kubenswrapper[4787]: I0126 18:00:50.283487 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" event={"ID":"87ebc1f2-80bd-46db-a605-c3667e656f5b","Type":"ContainerStarted","Data":"aa5041d343f382da96472b17f305495bfd9dd41b9bb4098b03c7e6950510ad96"} Jan 26 18:00:50 crc kubenswrapper[4787]: I0126 18:00:50.283717 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" Jan 26 18:00:50 crc kubenswrapper[4787]: I0126 18:00:50.305325 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" podStartSLOduration=3.335557736 podStartE2EDuration="47.305306658s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:05.209741147 +0000 UTC m=+973.916877280" lastFinishedPulling="2026-01-26 18:00:49.179490069 +0000 UTC m=+1017.886626202" observedRunningTime="2026-01-26 18:00:50.302971265 +0000 UTC m=+1019.010107408" watchObservedRunningTime="2026-01-26 18:00:50.305306658 +0000 UTC m=+1019.012442791" Jan 26 18:00:50 crc kubenswrapper[4787]: I0126 18:00:50.323576 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" podStartSLOduration=3.938627264 podStartE2EDuration="47.323558398s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.137364375 +0000 UTC m=+974.844500508" 
lastFinishedPulling="2026-01-26 18:00:49.522295519 +0000 UTC m=+1018.229431642" observedRunningTime="2026-01-26 18:00:50.317809569 +0000 UTC m=+1019.024945712" watchObservedRunningTime="2026-01-26 18:00:50.323558398 +0000 UTC m=+1019.030694531" Jan 26 18:00:50 crc kubenswrapper[4787]: I0126 18:00:50.335002 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" podStartSLOduration=2.977524757 podStartE2EDuration="46.334985544s" podCreationTimestamp="2026-01-26 18:00:04 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.165460206 +0000 UTC m=+974.872596339" lastFinishedPulling="2026-01-26 18:00:49.522920993 +0000 UTC m=+1018.230057126" observedRunningTime="2026-01-26 18:00:50.331458956 +0000 UTC m=+1019.038595089" watchObservedRunningTime="2026-01-26 18:00:50.334985544 +0000 UTC m=+1019.042121677" Jan 26 18:00:50 crc kubenswrapper[4787]: I0126 18:00:50.940536 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-b585d977c-4h4sg" Jan 26 18:00:51 crc kubenswrapper[4787]: I0126 18:00:51.291777 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" event={"ID":"6e4131c6-1507-4b15-92b7-29a2fe7f3775","Type":"ContainerStarted","Data":"b60afcba63a3d9c1399acbde92ef613aaec81180d0db60a4d6d6cb315d17f524"} Jan 26 18:00:51 crc kubenswrapper[4787]: I0126 18:00:51.292294 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" Jan 26 18:00:51 crc kubenswrapper[4787]: I0126 18:00:51.310614 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" podStartSLOduration=2.244289683 podStartE2EDuration="48.31059736s" podCreationTimestamp="2026-01-26 
18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:04.931792354 +0000 UTC m=+973.638928497" lastFinishedPulling="2026-01-26 18:00:50.998100041 +0000 UTC m=+1019.705236174" observedRunningTime="2026-01-26 18:00:51.309342092 +0000 UTC m=+1020.016478235" watchObservedRunningTime="2026-01-26 18:00:51.31059736 +0000 UTC m=+1020.017733503" Jan 26 18:00:52 crc kubenswrapper[4787]: I0126 18:00:52.308276 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" event={"ID":"2d7744e1-c01e-4dd9-87f2-7aa6695c2d60","Type":"ContainerStarted","Data":"61e5fcb599b234dc966603ccc8e41e262847e1173ce527e507b559b83026a4ca"} Jan 26 18:00:52 crc kubenswrapper[4787]: I0126 18:00:52.309638 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.129368 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-w7bvz" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.150644 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" podStartSLOduration=4.228822225 podStartE2EDuration="50.150621555s" podCreationTimestamp="2026-01-26 18:00:04 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.159519132 +0000 UTC m=+974.866655265" lastFinishedPulling="2026-01-26 18:00:52.081318462 +0000 UTC m=+1020.788454595" observedRunningTime="2026-01-26 18:00:52.343280538 +0000 UTC m=+1021.050416671" watchObservedRunningTime="2026-01-26 18:00:54.150621555 +0000 UTC m=+1022.857757688" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.163140 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ggtn5" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.311417 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glflp" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.324589 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.326403 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-nvpxf" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.360230 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-twprk" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.487882 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-gfrbb" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.535733 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-tcj6b" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.552398 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.580729 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4v6fm" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.704390 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-vqc5p" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.749596 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-4wd75" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.756728 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-trx4l" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.783208 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6k5bf" Jan 26 18:00:54 crc kubenswrapper[4787]: I0126 18:00:54.883231 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" Jan 26 18:00:55 crc kubenswrapper[4787]: I0126 18:00:55.094847 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-fmkqf" Jan 26 18:00:55 crc kubenswrapper[4787]: I0126 18:00:55.947934 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m9g4s" Jan 26 18:00:56 crc kubenswrapper[4787]: I0126 18:00:56.334900 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" event={"ID":"01d314a4-2f86-4cc3-ac94-7a09b363a05d","Type":"ContainerStarted","Data":"149a8d1835575eff65f7d8ecb6ca2e7123b3dd6e320ecc3fd760670036e0b86d"} Jan 26 18:00:56 crc kubenswrapper[4787]: I0126 18:00:56.335618 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" Jan 26 18:00:56 crc kubenswrapper[4787]: I0126 18:00:56.360223 4787 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" podStartSLOduration=2.943231603 podStartE2EDuration="53.360202799s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:05.704409738 +0000 UTC m=+974.411545871" lastFinishedPulling="2026-01-26 18:00:56.121380914 +0000 UTC m=+1024.828517067" observedRunningTime="2026-01-26 18:00:56.355429822 +0000 UTC m=+1025.062565965" watchObservedRunningTime="2026-01-26 18:00:56.360202799 +0000 UTC m=+1025.067338942" Jan 26 18:00:58 crc kubenswrapper[4787]: I0126 18:00:58.347437 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" event={"ID":"ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7","Type":"ContainerStarted","Data":"a99ee0cb153d450dd0ef0020d16b8df35a9024ebe5d079616cf63fd8aaf5cabe"} Jan 26 18:00:58 crc kubenswrapper[4787]: I0126 18:00:58.347931 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" Jan 26 18:00:58 crc kubenswrapper[4787]: I0126 18:00:58.361428 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" podStartSLOduration=4.239051582 podStartE2EDuration="55.361405992s" podCreationTimestamp="2026-01-26 18:00:03 +0000 UTC" firstStartedPulling="2026-01-26 18:00:06.098160104 +0000 UTC m=+974.805296237" lastFinishedPulling="2026-01-26 18:00:57.220514514 +0000 UTC m=+1025.927650647" observedRunningTime="2026-01-26 18:00:58.359084579 +0000 UTC m=+1027.066220732" watchObservedRunningTime="2026-01-26 18:00:58.361405992 +0000 UTC m=+1027.068542125" Jan 26 18:01:00 crc kubenswrapper[4787]: I0126 18:01:00.322413 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5" Jan 26 18:01:04 crc kubenswrapper[4787]: I0126 18:01:04.247740 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9n2vm" Jan 26 18:01:04 crc kubenswrapper[4787]: I0126 18:01:04.457517 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bswjx" Jan 26 18:01:04 crc kubenswrapper[4787]: I0126 18:01:04.608516 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-zhtc5" Jan 26 18:01:04 crc kubenswrapper[4787]: I0126 18:01:04.806057 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-x8ql6" Jan 26 18:01:16 crc kubenswrapper[4787]: I0126 18:01:16.807715 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:01:16 crc kubenswrapper[4787]: I0126 18:01:16.808224 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:01:16 crc kubenswrapper[4787]: I0126 18:01:16.808287 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 18:01:16 crc kubenswrapper[4787]: I0126 18:01:16.808919 4787 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72456f5ee53807e4dd44f118cd885938e3fabd30b979df1c99e1042ba20d5aff"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:01:16 crc kubenswrapper[4787]: I0126 18:01:16.808987 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://72456f5ee53807e4dd44f118cd885938e3fabd30b979df1c99e1042ba20d5aff" gracePeriod=600 Jan 26 18:01:18 crc kubenswrapper[4787]: I0126 18:01:18.481486 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="72456f5ee53807e4dd44f118cd885938e3fabd30b979df1c99e1042ba20d5aff" exitCode=0 Jan 26 18:01:18 crc kubenswrapper[4787]: I0126 18:01:18.481563 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"72456f5ee53807e4dd44f118cd885938e3fabd30b979df1c99e1042ba20d5aff"} Jan 26 18:01:18 crc kubenswrapper[4787]: I0126 18:01:18.482868 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"4b92bcf71cd03a611c9ec00077b120f3d83120698e0ca3da40d94cf74a7cfe86"} Jan 26 18:01:18 crc kubenswrapper[4787]: I0126 18:01:18.482909 4787 scope.go:117] "RemoveContainer" containerID="db3aa5f8ecb00426491aefbe94533ea044737555893f85a79b932f0e6fb23390" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.498821 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-84bb9d8bd9-8jr89"] Jan 26 18:01:19 crc kubenswrapper[4787]: E0126 18:01:19.499492 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerName="extract-utilities" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.499510 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerName="extract-utilities" Jan 26 18:01:19 crc kubenswrapper[4787]: E0126 18:01:19.499529 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a84a32-50ab-4e52-a923-62f079ecec13" containerName="extract-utilities" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.499537 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a84a32-50ab-4e52-a923-62f079ecec13" containerName="extract-utilities" Jan 26 18:01:19 crc kubenswrapper[4787]: E0126 18:01:19.499547 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a84a32-50ab-4e52-a923-62f079ecec13" containerName="extract-content" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.499555 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a84a32-50ab-4e52-a923-62f079ecec13" containerName="extract-content" Jan 26 18:01:19 crc kubenswrapper[4787]: E0126 18:01:19.499569 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerName="extract-content" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.499576 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerName="extract-content" Jan 26 18:01:19 crc kubenswrapper[4787]: E0126 18:01:19.499593 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a84a32-50ab-4e52-a923-62f079ecec13" containerName="registry-server" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.499600 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="02a84a32-50ab-4e52-a923-62f079ecec13" containerName="registry-server" Jan 26 18:01:19 crc kubenswrapper[4787]: E0126 18:01:19.499611 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerName="registry-server" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.499618 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerName="registry-server" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.499777 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3db753-c982-467b-b36a-c9d8a1e0d0a6" containerName="registry-server" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.499803 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a84a32-50ab-4e52-a923-62f079ecec13" containerName="registry-server" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.500726 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.503101 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fxtqf" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.503357 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.503640 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.505100 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.506184 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-8jr89"] Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.561560 4787 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-5f854695bc-l4nn5"] Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.563214 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.565162 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.572381 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-l4nn5"] Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.636838 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5gvc\" (UniqueName: \"kubernetes.io/projected/e9428aed-0b89-46f9-a84d-a905eb447a90-kube-api-access-s5gvc\") pod \"dnsmasq-dns-84bb9d8bd9-8jr89\" (UID: \"e9428aed-0b89-46f9-a84d-a905eb447a90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.637256 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9428aed-0b89-46f9-a84d-a905eb447a90-config\") pod \"dnsmasq-dns-84bb9d8bd9-8jr89\" (UID: \"e9428aed-0b89-46f9-a84d-a905eb447a90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.738454 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5gvc\" (UniqueName: \"kubernetes.io/projected/e9428aed-0b89-46f9-a84d-a905eb447a90-kube-api-access-s5gvc\") pod \"dnsmasq-dns-84bb9d8bd9-8jr89\" (UID: \"e9428aed-0b89-46f9-a84d-a905eb447a90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.738525 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-config\") pod \"dnsmasq-dns-5f854695bc-l4nn5\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.738563 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dfh2\" (UniqueName: \"kubernetes.io/projected/279d6bb9-80e1-4976-b95c-19405c02c1e8-kube-api-access-4dfh2\") pod \"dnsmasq-dns-5f854695bc-l4nn5\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.738611 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-dns-svc\") pod \"dnsmasq-dns-5f854695bc-l4nn5\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.738675 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9428aed-0b89-46f9-a84d-a905eb447a90-config\") pod \"dnsmasq-dns-84bb9d8bd9-8jr89\" (UID: \"e9428aed-0b89-46f9-a84d-a905eb447a90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.739552 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9428aed-0b89-46f9-a84d-a905eb447a90-config\") pod \"dnsmasq-dns-84bb9d8bd9-8jr89\" (UID: \"e9428aed-0b89-46f9-a84d-a905eb447a90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.755587 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5gvc\" (UniqueName: 
\"kubernetes.io/projected/e9428aed-0b89-46f9-a84d-a905eb447a90-kube-api-access-s5gvc\") pod \"dnsmasq-dns-84bb9d8bd9-8jr89\" (UID: \"e9428aed-0b89-46f9-a84d-a905eb447a90\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.820662 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.840539 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-config\") pod \"dnsmasq-dns-5f854695bc-l4nn5\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.840615 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dfh2\" (UniqueName: \"kubernetes.io/projected/279d6bb9-80e1-4976-b95c-19405c02c1e8-kube-api-access-4dfh2\") pod \"dnsmasq-dns-5f854695bc-l4nn5\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.840664 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-dns-svc\") pod \"dnsmasq-dns-5f854695bc-l4nn5\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.841427 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-config\") pod \"dnsmasq-dns-5f854695bc-l4nn5\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.841525 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-dns-svc\") pod \"dnsmasq-dns-5f854695bc-l4nn5\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.866570 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dfh2\" (UniqueName: \"kubernetes.io/projected/279d6bb9-80e1-4976-b95c-19405c02c1e8-kube-api-access-4dfh2\") pod \"dnsmasq-dns-5f854695bc-l4nn5\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:19 crc kubenswrapper[4787]: I0126 18:01:19.882355 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:20 crc kubenswrapper[4787]: I0126 18:01:20.241866 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-8jr89"] Jan 26 18:01:20 crc kubenswrapper[4787]: I0126 18:01:20.312574 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-l4nn5"] Jan 26 18:01:20 crc kubenswrapper[4787]: W0126 18:01:20.317852 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279d6bb9_80e1_4976_b95c_19405c02c1e8.slice/crio-8f4d72bdb67724d21551a86ed0cfc9aaff6c9268f3770458c7b2210d0a22490c WatchSource:0}: Error finding container 8f4d72bdb67724d21551a86ed0cfc9aaff6c9268f3770458c7b2210d0a22490c: Status 404 returned error can't find the container with id 8f4d72bdb67724d21551a86ed0cfc9aaff6c9268f3770458c7b2210d0a22490c Jan 26 18:01:20 crc kubenswrapper[4787]: I0126 18:01:20.520788 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" 
event={"ID":"e9428aed-0b89-46f9-a84d-a905eb447a90","Type":"ContainerStarted","Data":"de52298d5d0080aa29eb221c97f6060090acd2d091826a3c628c1e6bd3357970"} Jan 26 18:01:20 crc kubenswrapper[4787]: I0126 18:01:20.524047 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" event={"ID":"279d6bb9-80e1-4976-b95c-19405c02c1e8","Type":"ContainerStarted","Data":"8f4d72bdb67724d21551a86ed0cfc9aaff6c9268f3770458c7b2210d0a22490c"} Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.220813 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-l4nn5"] Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.250886 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-nz4cl"] Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.253433 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.261445 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-nz4cl"] Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.375708 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj5vb\" (UniqueName: \"kubernetes.io/projected/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-kube-api-access-pj5vb\") pod \"dnsmasq-dns-c7cbb8f79-nz4cl\" (UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.375815 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-config\") pod \"dnsmasq-dns-c7cbb8f79-nz4cl\" (UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 
18:01:22.375847 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-nz4cl\" (UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.476776 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-config\") pod \"dnsmasq-dns-c7cbb8f79-nz4cl\" (UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.476831 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-nz4cl\" (UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.476867 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj5vb\" (UniqueName: \"kubernetes.io/projected/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-kube-api-access-pj5vb\") pod \"dnsmasq-dns-c7cbb8f79-nz4cl\" (UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.478211 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-config\") pod \"dnsmasq-dns-c7cbb8f79-nz4cl\" (UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.478331 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-nz4cl\" (UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.533828 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj5vb\" (UniqueName: \"kubernetes.io/projected/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-kube-api-access-pj5vb\") pod \"dnsmasq-dns-c7cbb8f79-nz4cl\" (UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.575344 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.595004 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-8jr89"] Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.669904 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-96q27"] Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.671351 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.711146 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-96q27"] Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.783225 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-config\") pod \"dnsmasq-dns-95f5f6995-96q27\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.783638 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-dns-svc\") pod \"dnsmasq-dns-95f5f6995-96q27\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.783705 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbkc8\" (UniqueName: \"kubernetes.io/projected/65dd567d-a032-4faa-acb4-44c56f938651-kube-api-access-bbkc8\") pod \"dnsmasq-dns-95f5f6995-96q27\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.884582 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-dns-svc\") pod \"dnsmasq-dns-95f5f6995-96q27\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.884661 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbkc8\" (UniqueName: 
\"kubernetes.io/projected/65dd567d-a032-4faa-acb4-44c56f938651-kube-api-access-bbkc8\") pod \"dnsmasq-dns-95f5f6995-96q27\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.884708 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-config\") pod \"dnsmasq-dns-95f5f6995-96q27\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.885774 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-config\") pod \"dnsmasq-dns-95f5f6995-96q27\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.887467 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-dns-svc\") pod \"dnsmasq-dns-95f5f6995-96q27\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:22 crc kubenswrapper[4787]: I0126 18:01:22.925064 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbkc8\" (UniqueName: \"kubernetes.io/projected/65dd567d-a032-4faa-acb4-44c56f938651-kube-api-access-bbkc8\") pod \"dnsmasq-dns-95f5f6995-96q27\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.046645 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.202841 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-nz4cl"] Jan 26 18:01:23 crc kubenswrapper[4787]: W0126 18:01:23.220443 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35b2bbe1_ebf3_4549_b0af_6be1e3d77fad.slice/crio-e1c433c28020132eff1b9c1ee68cb5e0f64e3f9fc3c5f5a963e65221b179f458 WatchSource:0}: Error finding container e1c433c28020132eff1b9c1ee68cb5e0f64e3f9fc3c5f5a963e65221b179f458: Status 404 returned error can't find the container with id e1c433c28020132eff1b9c1ee68cb5e0f64e3f9fc3c5f5a963e65221b179f458 Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.380097 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.381543 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.386981 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.387212 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.387382 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.387576 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-lq9sd" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.388144 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.388504 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.388779 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.398589 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.486763 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-96q27"] Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.493548 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" 
Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.493603 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.493753 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.493775 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb1abb80-0591-49c7-b549-969066392a5a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.493791 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.493816 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc 
kubenswrapper[4787]: I0126 18:01:23.493846 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csxbl\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-kube-api-access-csxbl\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.493862 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb1abb80-0591-49c7-b549-969066392a5a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.493883 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.493909 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.493934 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 
crc kubenswrapper[4787]: I0126 18:01:23.553755 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-96q27" event={"ID":"65dd567d-a032-4faa-acb4-44c56f938651","Type":"ContainerStarted","Data":"bd83d6b76dad7d9a5b14b89d408ecf22f30903be7b5200e7d911eabb5a66c69f"} Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.558181 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" event={"ID":"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad","Type":"ContainerStarted","Data":"e1c433c28020132eff1b9c1ee68cb5e0f64e3f9fc3c5f5a963e65221b179f458"} Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.596817 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb1abb80-0591-49c7-b549-969066392a5a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.596995 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.597049 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.597090 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.597149 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.597210 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.597243 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.597269 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb1abb80-0591-49c7-b549-969066392a5a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.597298 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.597343 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.597394 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxbl\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-kube-api-access-csxbl\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.604491 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.607013 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.607130 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") device mount path \"/mnt/openstack/pv03\"" 
pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.607625 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb1abb80-0591-49c7-b549-969066392a5a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.608809 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb1abb80-0591-49c7-b549-969066392a5a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.609021 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.609370 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.609653 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.615272 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.617270 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.620196 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csxbl\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-kube-api-access-csxbl\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.640390 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.703735 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.741171 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.742420 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.745399 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.745711 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.746126 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bp7xr" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.746896 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.746914 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.747068 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.747658 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.777956 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.902593 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.902676 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.902704 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57e65f25-43dd-4baf-b2fa-7256dcbd452d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.902741 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.902764 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.902789 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.902807 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.902835 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57e65f25-43dd-4baf-b2fa-7256dcbd452d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.902856 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.902871 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgqln\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-kube-api-access-mgqln\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:23 crc kubenswrapper[4787]: I0126 18:01:23.902894 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.005839 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.005909 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.005958 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.005986 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.006030 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57e65f25-43dd-4baf-b2fa-7256dcbd452d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.006062 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " 
pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.006083 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgqln\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-kube-api-access-mgqln\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.006118 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.006146 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.006191 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.006225 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57e65f25-43dd-4baf-b2fa-7256dcbd452d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.007471 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.008265 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.008376 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.008555 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.009169 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.011144 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.034535 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57e65f25-43dd-4baf-b2fa-7256dcbd452d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.035145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57e65f25-43dd-4baf-b2fa-7256dcbd452d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.035683 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.036252 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.041615 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.044347 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mgqln\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-kube-api-access-mgqln\") pod \"rabbitmq-server-0\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " pod="openstack/rabbitmq-server-0" Jan 26 18:01:24 crc kubenswrapper[4787]: I0126 18:01:24.072569 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.130511 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.148867 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.149051 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.150904 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.152165 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.152449 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.155223 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-s8jwd" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.157552 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.225981 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.226065 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.226095 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kolla-config\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.226241 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.226322 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-default\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.226351 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.226407 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.226437 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m5mw\" (UniqueName: \"kubernetes.io/projected/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kube-api-access-4m5mw\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.329669 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.329729 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.329760 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m5mw\" (UniqueName: 
\"kubernetes.io/projected/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kube-api-access-4m5mw\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.329800 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.329825 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.330264 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.330531 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kolla-config\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.330618 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: 
\"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.330675 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-default\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.331114 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kolla-config\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.331325 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.334037 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-default\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.334811 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.343483 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.350620 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m5mw\" (UniqueName: \"kubernetes.io/projected/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kube-api-access-4m5mw\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.355802 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.358521 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " pod="openstack/openstack-galera-0" Jan 26 18:01:25 crc kubenswrapper[4787]: I0126 18:01:25.473963 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.548444 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.550023 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.553576 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.555346 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-z8lq6" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.555581 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.555727 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.555860 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.647473 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.647530 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.647561 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2cbz\" (UniqueName: 
\"kubernetes.io/projected/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kube-api-access-v2cbz\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.647655 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.647815 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.647846 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.647911 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.647992 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.749640 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.749707 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.750058 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2cbz\" (UniqueName: \"kubernetes.io/projected/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kube-api-access-v2cbz\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.749987 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.750252 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.751108 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.751163 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.751209 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.751274 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.751372 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.752065 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.752545 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.752925 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.757758 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.764722 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" 
(UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.767990 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.772022 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2cbz\" (UniqueName: \"kubernetes.io/projected/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kube-api-access-v2cbz\") pod \"openstack-cell1-galera-0\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.876743 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.894836 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.895785 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.899349 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-7lrqr" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.900084 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.900609 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.912408 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.953475 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kolla-config\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.953525 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prt5p\" (UniqueName: \"kubernetes.io/projected/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kube-api-access-prt5p\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.953546 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-config-data\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.953581 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:26 crc kubenswrapper[4787]: I0126 18:01:26.953653 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:27 crc kubenswrapper[4787]: I0126 18:01:27.055654 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:27 crc kubenswrapper[4787]: I0126 18:01:27.055768 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kolla-config\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:27 crc kubenswrapper[4787]: I0126 18:01:27.055807 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prt5p\" (UniqueName: \"kubernetes.io/projected/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kube-api-access-prt5p\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:27 crc kubenswrapper[4787]: I0126 18:01:27.055828 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-config-data\") pod \"memcached-0\" (UID: 
\"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:27 crc kubenswrapper[4787]: I0126 18:01:27.055860 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:27 crc kubenswrapper[4787]: I0126 18:01:27.056790 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-config-data\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:27 crc kubenswrapper[4787]: I0126 18:01:27.056790 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kolla-config\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:27 crc kubenswrapper[4787]: I0126 18:01:27.060572 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:27 crc kubenswrapper[4787]: I0126 18:01:27.061359 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:27 crc kubenswrapper[4787]: I0126 18:01:27.074421 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prt5p\" (UniqueName: 
\"kubernetes.io/projected/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kube-api-access-prt5p\") pod \"memcached-0\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " pod="openstack/memcached-0" Jan 26 18:01:27 crc kubenswrapper[4787]: I0126 18:01:27.226104 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 18:01:28 crc kubenswrapper[4787]: I0126 18:01:28.613421 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:01:28 crc kubenswrapper[4787]: I0126 18:01:28.614828 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:01:28 crc kubenswrapper[4787]: I0126 18:01:28.616504 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jzxmk" Jan 26 18:01:28 crc kubenswrapper[4787]: I0126 18:01:28.622113 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:01:28 crc kubenswrapper[4787]: I0126 18:01:28.684908 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gdkp\" (UniqueName: \"kubernetes.io/projected/a1364ca0-6d34-493f-98b2-7956de27e72c-kube-api-access-2gdkp\") pod \"kube-state-metrics-0\" (UID: \"a1364ca0-6d34-493f-98b2-7956de27e72c\") " pod="openstack/kube-state-metrics-0" Jan 26 18:01:28 crc kubenswrapper[4787]: I0126 18:01:28.787959 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gdkp\" (UniqueName: \"kubernetes.io/projected/a1364ca0-6d34-493f-98b2-7956de27e72c-kube-api-access-2gdkp\") pod \"kube-state-metrics-0\" (UID: \"a1364ca0-6d34-493f-98b2-7956de27e72c\") " pod="openstack/kube-state-metrics-0" Jan 26 18:01:28 crc kubenswrapper[4787]: I0126 18:01:28.813241 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gdkp\" (UniqueName: 
\"kubernetes.io/projected/a1364ca0-6d34-493f-98b2-7956de27e72c-kube-api-access-2gdkp\") pod \"kube-state-metrics-0\" (UID: \"a1364ca0-6d34-493f-98b2-7956de27e72c\") " pod="openstack/kube-state-metrics-0" Jan 26 18:01:28 crc kubenswrapper[4787]: I0126 18:01:28.940507 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.207212 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5rlw8"] Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.209589 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.215060 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.215317 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4gxn4" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.215156 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.225574 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5rlw8"] Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.250878 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-ovn-controller-tls-certs\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.250922 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-log-ovn\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.250962 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.250978 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b55e012f-76df-4721-8be3-dba72f37cf33-scripts\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.251267 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run-ovn\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.251430 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-combined-ca-bundle\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.251725 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9sdj\" (UniqueName: 
\"kubernetes.io/projected/b55e012f-76df-4721-8be3-dba72f37cf33-kube-api-access-b9sdj\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.262411 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-hpbh5"] Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.264362 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.278142 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hpbh5"] Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.353522 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9sdj\" (UniqueName: \"kubernetes.io/projected/b55e012f-76df-4721-8be3-dba72f37cf33-kube-api-access-b9sdj\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.353578 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-run\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.353614 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/254806cc-4007-4a34-9852-0716b123830f-scripts\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.353644 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-lib\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.353715 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-ovn-controller-tls-certs\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.353766 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-log-ovn\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.353797 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b55e012f-76df-4721-8be3-dba72f37cf33-scripts\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.353814 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.353833 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mphmh\" (UniqueName: 
\"kubernetes.io/projected/254806cc-4007-4a34-9852-0716b123830f-kube-api-access-mphmh\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.353939 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run-ovn\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.354005 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-log\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.354068 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-combined-ca-bundle\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.354193 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-etc-ovs\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.354463 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-log-ovn\") pod 
\"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.354590 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run-ovn\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.354635 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.356569 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b55e012f-76df-4721-8be3-dba72f37cf33-scripts\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.358369 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-ovn-controller-tls-certs\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.358607 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-combined-ca-bundle\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.373821 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9sdj\" (UniqueName: \"kubernetes.io/projected/b55e012f-76df-4721-8be3-dba72f37cf33-kube-api-access-b9sdj\") pod \"ovn-controller-5rlw8\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.461110 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/254806cc-4007-4a34-9852-0716b123830f-scripts\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.461187 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-lib\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.461240 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mphmh\" (UniqueName: \"kubernetes.io/projected/254806cc-4007-4a34-9852-0716b123830f-kube-api-access-mphmh\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.461323 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-log\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.461368 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-etc-ovs\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.461432 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-run\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.461584 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-run\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.467763 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-log\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.467794 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-etc-ovs\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.467806 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-lib\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " 
pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.470821 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/254806cc-4007-4a34-9852-0716b123830f-scripts\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.484810 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mphmh\" (UniqueName: \"kubernetes.io/projected/254806cc-4007-4a34-9852-0716b123830f-kube-api-access-mphmh\") pod \"ovn-controller-ovs-hpbh5\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.542766 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:32 crc kubenswrapper[4787]: I0126 18:01:32.588771 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.112758 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.114040 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.116661 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.116703 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-7bqsl" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.117346 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.117416 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.117433 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.124682 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.172732 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.173042 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h5z4\" (UniqueName: \"kubernetes.io/projected/8125efc3-988e-4689-acea-515119c3764f-kube-api-access-7h5z4\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.173175 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.173250 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.173359 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.173444 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.173528 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8125efc3-988e-4689-acea-515119c3764f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.173620 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-config\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.275653 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.275719 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.275766 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8125efc3-988e-4689-acea-515119c3764f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.275789 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-config\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.275846 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc 
kubenswrapper[4787]: I0126 18:01:33.275902 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h5z4\" (UniqueName: \"kubernetes.io/projected/8125efc3-988e-4689-acea-515119c3764f-kube-api-access-7h5z4\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.275932 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.275974 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.276245 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.276895 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.277162 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/8125efc3-988e-4689-acea-515119c3764f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.277428 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-config\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.280972 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.289012 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.297792 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h5z4\" (UniqueName: \"kubernetes.io/projected/8125efc3-988e-4689-acea-515119c3764f-kube-api-access-7h5z4\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.299186 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 
18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.302572 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:33 crc kubenswrapper[4787]: I0126 18:01:33.440506 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:35 crc kubenswrapper[4787]: E0126 18:01:35.363771 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 26 18:01:35 crc kubenswrapper[4787]: E0126 18:01:35.364384 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dfh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-l4nn5_openstack(279d6bb9-80e1-4976-b95c-19405c02c1e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:01:35 crc kubenswrapper[4787]: E0126 18:01:35.365561 4787 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" podUID="279d6bb9-80e1-4976-b95c-19405c02c1e8" Jan 26 18:01:35 crc kubenswrapper[4787]: I0126 18:01:35.773143 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:01:36 crc kubenswrapper[4787]: E0126 18:01:36.088442 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 26 18:01:36 crc kubenswrapper[4787]: E0126 18:01:36.088787 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5gvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-8jr89_openstack(e9428aed-0b89-46f9-a84d-a905eb447a90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:01:36 crc kubenswrapper[4787]: E0126 18:01:36.090275 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" podUID="e9428aed-0b89-46f9-a84d-a905eb447a90" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.192449 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.322785 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dfh2\" (UniqueName: \"kubernetes.io/projected/279d6bb9-80e1-4976-b95c-19405c02c1e8-kube-api-access-4dfh2\") pod \"279d6bb9-80e1-4976-b95c-19405c02c1e8\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.323092 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-config\") pod \"279d6bb9-80e1-4976-b95c-19405c02c1e8\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.323210 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-dns-svc\") pod \"279d6bb9-80e1-4976-b95c-19405c02c1e8\" (UID: \"279d6bb9-80e1-4976-b95c-19405c02c1e8\") " Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.323896 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "279d6bb9-80e1-4976-b95c-19405c02c1e8" (UID: "279d6bb9-80e1-4976-b95c-19405c02c1e8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.324971 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-config" (OuterVolumeSpecName: "config") pod "279d6bb9-80e1-4976-b95c-19405c02c1e8" (UID: "279d6bb9-80e1-4976-b95c-19405c02c1e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.328999 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/279d6bb9-80e1-4976-b95c-19405c02c1e8-kube-api-access-4dfh2" (OuterVolumeSpecName: "kube-api-access-4dfh2") pod "279d6bb9-80e1-4976-b95c-19405c02c1e8" (UID: "279d6bb9-80e1-4976-b95c-19405c02c1e8"). InnerVolumeSpecName "kube-api-access-4dfh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.425259 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.425305 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dfh2\" (UniqueName: \"kubernetes.io/projected/279d6bb9-80e1-4976-b95c-19405c02c1e8-kube-api-access-4dfh2\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.425323 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279d6bb9-80e1-4976-b95c-19405c02c1e8-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.523418 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.624962 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.626366 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.633391 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-78zh5" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.633794 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.633896 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.634195 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.635172 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:01:36 crc kubenswrapper[4787]: W0126 18:01:36.674316 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ba670d_d2d7_47aa_bc54_6da4d0e532f3.slice/crio-199d8b8e9622bcca1bb712c199085fbec6f15c848c4c4c9c0fc22a7f54cbb51e WatchSource:0}: Error finding container 199d8b8e9622bcca1bb712c199085fbec6f15c848c4c4c9c0fc22a7f54cbb51e: Status 404 returned error can't find the container with id 199d8b8e9622bcca1bb712c199085fbec6f15c848c4c4c9c0fc22a7f54cbb51e Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.676863 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.677133 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" 
event={"ID":"279d6bb9-80e1-4976-b95c-19405c02c1e8","Type":"ContainerDied","Data":"8f4d72bdb67724d21551a86ed0cfc9aaff6c9268f3770458c7b2210d0a22490c"} Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.677217 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-l4nn5" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.682920 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1364ca0-6d34-493f-98b2-7956de27e72c","Type":"ContainerStarted","Data":"1c2f90edace1bd4dda274dbffb8d99c279dd09ff884bfb7a5f3620a227f93b73"} Jan 26 18:01:36 crc kubenswrapper[4787]: W0126 18:01:36.691428 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55e012f_76df_4721_8be3_dba72f37cf33.slice/crio-6a394825216913acacc5a2c40fedf76af28b984380403144b044d6e21a2d2aac WatchSource:0}: Error finding container 6a394825216913acacc5a2c40fedf76af28b984380403144b044d6e21a2d2aac: Status 404 returned error can't find the container with id 6a394825216913acacc5a2c40fedf76af28b984380403144b044d6e21a2d2aac Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.692180 4787 generic.go:334] "Generic (PLEG): container finished" podID="65dd567d-a032-4faa-acb4-44c56f938651" containerID="3c500f8321ca3c8ef5120f0e76080041783ef920d55b80ece46e26daef5c2998" exitCode=0 Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.692221 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-96q27" event={"ID":"65dd567d-a032-4faa-acb4-44c56f938651","Type":"ContainerDied","Data":"3c500f8321ca3c8ef5120f0e76080041783ef920d55b80ece46e26daef5c2998"} Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.696631 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.706053 4787 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5rlw8"] Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.712726 4787 generic.go:334] "Generic (PLEG): container finished" podID="35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" containerID="6aa0e8a74496dc6354ec2af98f1a8366bad74eed47cdfdf73561402d5c37a812" exitCode=0 Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.712908 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" event={"ID":"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad","Type":"ContainerDied","Data":"6aa0e8a74496dc6354ec2af98f1a8366bad74eed47cdfdf73561402d5c37a812"} Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.718023 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.724432 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d3871ebb-6b25-4e36-a3a1-3e9a220768f5","Type":"ContainerStarted","Data":"d040f7addf0a6b6714329b0efa81f4294fccfb9bdfd3978b7449205fa25a0ee3"} Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.733089 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.733148 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.733205 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.733253 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.733333 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.733397 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddmlh\" (UniqueName: \"kubernetes.io/projected/2ac3cbab-86ff-4544-bf13-b1039585edbe-kube-api-access-ddmlh\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.733439 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-config\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.733454 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.736836 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.811233 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-l4nn5"] Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.821897 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-l4nn5"] Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.835125 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddmlh\" (UniqueName: \"kubernetes.io/projected/2ac3cbab-86ff-4544-bf13-b1039585edbe-kube-api-access-ddmlh\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.835387 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-config\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.835488 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.835567 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.835660 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.835980 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.836844 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.837100 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-config\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.839622 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.839783 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.839788 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.840035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.840363 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.843763 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.847338 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.864770 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddmlh\" (UniqueName: \"kubernetes.io/projected/2ac3cbab-86ff-4544-bf13-b1039585edbe-kube-api-access-ddmlh\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.888218 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:36 crc kubenswrapper[4787]: I0126 18:01:36.963021 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.010255 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.253464 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.317452 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5gvc\" (UniqueName: \"kubernetes.io/projected/e9428aed-0b89-46f9-a84d-a905eb447a90-kube-api-access-s5gvc\") pod \"e9428aed-0b89-46f9-a84d-a905eb447a90\" (UID: \"e9428aed-0b89-46f9-a84d-a905eb447a90\") " Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.317546 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9428aed-0b89-46f9-a84d-a905eb447a90-config\") pod \"e9428aed-0b89-46f9-a84d-a905eb447a90\" (UID: \"e9428aed-0b89-46f9-a84d-a905eb447a90\") " Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.319556 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9428aed-0b89-46f9-a84d-a905eb447a90-config" (OuterVolumeSpecName: "config") pod "e9428aed-0b89-46f9-a84d-a905eb447a90" (UID: "e9428aed-0b89-46f9-a84d-a905eb447a90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.328313 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9428aed-0b89-46f9-a84d-a905eb447a90-kube-api-access-s5gvc" (OuterVolumeSpecName: "kube-api-access-s5gvc") pod "e9428aed-0b89-46f9-a84d-a905eb447a90" (UID: "e9428aed-0b89-46f9-a84d-a905eb447a90"). InnerVolumeSpecName "kube-api-access-s5gvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.419642 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9428aed-0b89-46f9-a84d-a905eb447a90-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.419678 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5gvc\" (UniqueName: \"kubernetes.io/projected/e9428aed-0b89-46f9-a84d-a905eb447a90-kube-api-access-s5gvc\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.602820 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="279d6bb9-80e1-4976-b95c-19405c02c1e8" path="/var/lib/kubelet/pods/279d6bb9-80e1-4976-b95c-19405c02c1e8/volumes" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.603466 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hpbh5"] Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.694844 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.731803 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2j4vw"] Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.734474 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.738053 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.746697 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2j4vw"] Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.760901 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" event={"ID":"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad","Type":"ContainerStarted","Data":"1b82ba61ff1491e9d8f749e0635dd324f0815c0a616d3f2a0772cc69d761fb51"} Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.762608 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.764354 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb1abb80-0591-49c7-b549-969066392a5a","Type":"ContainerStarted","Data":"8d9e1d13dba7f54258e89da7a97d5d2f915dcdc1adce2a1e9d02e8b75ea65ea6"} Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.775556 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"28ba670d-d2d7-47aa-bc54-6da4d0e532f3","Type":"ContainerStarted","Data":"199d8b8e9622bcca1bb712c199085fbec6f15c848c4c4c9c0fc22a7f54cbb51e"} Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.791707 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" podStartSLOduration=2.945613787 podStartE2EDuration="15.791688834s" podCreationTimestamp="2026-01-26 18:01:22 +0000 UTC" firstStartedPulling="2026-01-26 18:01:23.22347686 +0000 UTC m=+1051.930612993" lastFinishedPulling="2026-01-26 18:01:36.069551907 +0000 UTC m=+1064.776688040" observedRunningTime="2026-01-26 
18:01:37.781081447 +0000 UTC m=+1066.488217580" watchObservedRunningTime="2026-01-26 18:01:37.791688834 +0000 UTC m=+1066.498824967" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.803914 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-96q27" event={"ID":"65dd567d-a032-4faa-acb4-44c56f938651","Type":"ContainerStarted","Data":"c5378c68a1f01857168a7890649305e5cf6d6d2e08b489c6ff6ca65d3e1890c7"} Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.804017 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.805656 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" event={"ID":"e9428aed-0b89-46f9-a84d-a905eb447a90","Type":"ContainerDied","Data":"de52298d5d0080aa29eb221c97f6060090acd2d091826a3c628c1e6bd3357970"} Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.805712 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-8jr89" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.808128 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8125efc3-988e-4689-acea-515119c3764f","Type":"ContainerStarted","Data":"4dd4dbe51885917f88d5db2e6a4328592c065999c2e440e6f7522ec870225f26"} Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.820258 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"57e65f25-43dd-4baf-b2fa-7256dcbd452d","Type":"ContainerStarted","Data":"8ddf2f9cc6fdc0aed0b4ecb985b107a7105c3dc393c8836e8e36c2f918115c9d"} Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.828285 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtggw\" (UniqueName: \"kubernetes.io/projected/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-kube-api-access-rtggw\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.828352 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovn-rundir\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.828420 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-config\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.828548 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-combined-ca-bundle\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.828698 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovs-rundir\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.828754 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.832986 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-96q27" podStartSLOduration=3.252212534 podStartE2EDuration="15.832966744s" podCreationTimestamp="2026-01-26 18:01:22 +0000 UTC" firstStartedPulling="2026-01-26 18:01:23.496704838 +0000 UTC m=+1052.203840971" lastFinishedPulling="2026-01-26 18:01:36.077459048 +0000 UTC m=+1064.784595181" observedRunningTime="2026-01-26 18:01:37.824909228 +0000 UTC m=+1066.532045361" watchObservedRunningTime="2026-01-26 18:01:37.832966744 +0000 UTC m=+1066.540102877" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.836551 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5rlw8" 
event={"ID":"b55e012f-76df-4721-8be3-dba72f37cf33","Type":"ContainerStarted","Data":"6a394825216913acacc5a2c40fedf76af28b984380403144b044d6e21a2d2aac"} Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.851797 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7ef9888f-835c-40ea-9d3b-c7084cbffd65","Type":"ContainerStarted","Data":"1711b2bbcf17b91b0159c6b600a6f474a2f74caef1cd3917c38e44ee5a7ad3b4"} Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.875075 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-8jr89"] Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.882166 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-8jr89"] Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.902852 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-96q27"] Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.931054 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-config\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.931178 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-combined-ca-bundle\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.931217 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovs-rundir\") pod 
\"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.931240 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.931334 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtggw\" (UniqueName: \"kubernetes.io/projected/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-kube-api-access-rtggw\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.931379 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovn-rundir\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.931741 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovn-rundir\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.932151 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovs-rundir\") pod \"ovn-controller-metrics-2j4vw\" (UID: 
\"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.932746 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-config\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.939066 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-combined-ca-bundle\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.939127 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7878659675-cfcjq"] Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.941718 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.947258 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.949374 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.963810 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-cfcjq"] Jan 26 18:01:37 crc kubenswrapper[4787]: I0126 18:01:37.969011 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtggw\" (UniqueName: \"kubernetes.io/projected/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-kube-api-access-rtggw\") pod \"ovn-controller-metrics-2j4vw\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.032829 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-dns-svc\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.032877 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 
18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.032897 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-config\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.032928 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9km6\" (UniqueName: \"kubernetes.io/projected/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-kube-api-access-z9km6\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.063992 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.080382 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-nz4cl"] Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.110301 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-hcxwt"] Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.111556 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.118435 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.129470 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-hcxwt"] Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.135845 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9km6\" (UniqueName: \"kubernetes.io/projected/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-kube-api-access-z9km6\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.136181 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-dns-svc\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.136206 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.136262 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-config\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.137651 
4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-config\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.137822 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-dns-svc\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.138293 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.153437 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9km6\" (UniqueName: \"kubernetes.io/projected/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-kube-api-access-z9km6\") pod \"dnsmasq-dns-7878659675-cfcjq\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.237999 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.238046 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-config\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.238106 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2n4x\" (UniqueName: \"kubernetes.io/projected/bd9124dd-78e1-4006-b80e-1407323003b1-kube-api-access-c2n4x\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.238206 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-dns-svc\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.238251 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.268458 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.339787 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.340178 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-config\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.340232 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2n4x\" (UniqueName: \"kubernetes.io/projected/bd9124dd-78e1-4006-b80e-1407323003b1-kube-api-access-c2n4x\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.340312 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-dns-svc\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.340366 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" 
Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.341970 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-dns-svc\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.342105 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.343509 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-config\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.344260 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.359611 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2n4x\" (UniqueName: \"kubernetes.io/projected/bd9124dd-78e1-4006-b80e-1407323003b1-kube-api-access-c2n4x\") pod \"dnsmasq-dns-586b989cdc-hcxwt\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.436868 4787 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.859645 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2ac3cbab-86ff-4544-bf13-b1039585edbe","Type":"ContainerStarted","Data":"7acef36f7fa4c435d02da4ded69dacd227ecd1d664592ba05f6e29cafd408791"} Jan 26 18:01:38 crc kubenswrapper[4787]: I0126 18:01:38.860889 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hpbh5" event={"ID":"254806cc-4007-4a34-9852-0716b123830f","Type":"ContainerStarted","Data":"0b23964af3115e559b81d945ed7ff24477f6fbce806bcee883546e58487ca21b"} Jan 26 18:01:39 crc kubenswrapper[4787]: I0126 18:01:39.598619 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9428aed-0b89-46f9-a84d-a905eb447a90" path="/var/lib/kubelet/pods/e9428aed-0b89-46f9-a84d-a905eb447a90/volumes" Jan 26 18:01:39 crc kubenswrapper[4787]: I0126 18:01:39.869324 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-96q27" podUID="65dd567d-a032-4faa-acb4-44c56f938651" containerName="dnsmasq-dns" containerID="cri-o://c5378c68a1f01857168a7890649305e5cf6d6d2e08b489c6ff6ca65d3e1890c7" gracePeriod=10 Jan 26 18:01:39 crc kubenswrapper[4787]: I0126 18:01:39.869485 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" podUID="35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" containerName="dnsmasq-dns" containerID="cri-o://1b82ba61ff1491e9d8f749e0635dd324f0815c0a616d3f2a0772cc69d761fb51" gracePeriod=10 Jan 26 18:01:40 crc kubenswrapper[4787]: I0126 18:01:40.878684 4787 generic.go:334] "Generic (PLEG): container finished" podID="35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" containerID="1b82ba61ff1491e9d8f749e0635dd324f0815c0a616d3f2a0772cc69d761fb51" exitCode=0 Jan 26 18:01:40 crc kubenswrapper[4787]: I0126 18:01:40.878775 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" event={"ID":"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad","Type":"ContainerDied","Data":"1b82ba61ff1491e9d8f749e0635dd324f0815c0a616d3f2a0772cc69d761fb51"} Jan 26 18:01:40 crc kubenswrapper[4787]: I0126 18:01:40.880521 4787 generic.go:334] "Generic (PLEG): container finished" podID="65dd567d-a032-4faa-acb4-44c56f938651" containerID="c5378c68a1f01857168a7890649305e5cf6d6d2e08b489c6ff6ca65d3e1890c7" exitCode=0 Jan 26 18:01:40 crc kubenswrapper[4787]: I0126 18:01:40.880575 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-96q27" event={"ID":"65dd567d-a032-4faa-acb4-44c56f938651","Type":"ContainerDied","Data":"c5378c68a1f01857168a7890649305e5cf6d6d2e08b489c6ff6ca65d3e1890c7"} Jan 26 18:01:42 crc kubenswrapper[4787]: I0126 18:01:42.578914 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" podUID="35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.98:5353: connect: connection refused" Jan 26 18:01:43 crc kubenswrapper[4787]: I0126 18:01:43.049244 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-95f5f6995-96q27" podUID="65dd567d-a032-4faa-acb4-44c56f938651" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.99:5353: connect: connection refused" Jan 26 18:01:43 crc kubenswrapper[4787]: I0126 18:01:43.868618 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-hcxwt"] Jan 26 18:01:44 crc kubenswrapper[4787]: I0126 18:01:44.920315 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" event={"ID":"bd9124dd-78e1-4006-b80e-1407323003b1","Type":"ContainerStarted","Data":"434d50744e4a358eff67056bd9558836f97dbb6ae14e28f5ba2f3b65050b9a7a"} Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.275613 4787 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.278858 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.371398 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-config\") pod \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\" (UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.371493 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbkc8\" (UniqueName: \"kubernetes.io/projected/65dd567d-a032-4faa-acb4-44c56f938651-kube-api-access-bbkc8\") pod \"65dd567d-a032-4faa-acb4-44c56f938651\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.371572 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj5vb\" (UniqueName: \"kubernetes.io/projected/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-kube-api-access-pj5vb\") pod \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\" (UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.371596 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-config\") pod \"65dd567d-a032-4faa-acb4-44c56f938651\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.371684 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-dns-svc\") pod \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\" 
(UID: \"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad\") " Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.371723 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-dns-svc\") pod \"65dd567d-a032-4faa-acb4-44c56f938651\" (UID: \"65dd567d-a032-4faa-acb4-44c56f938651\") " Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.375581 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-kube-api-access-pj5vb" (OuterVolumeSpecName: "kube-api-access-pj5vb") pod "35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" (UID: "35b2bbe1-ebf3-4549-b0af-6be1e3d77fad"). InnerVolumeSpecName "kube-api-access-pj5vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.378555 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65dd567d-a032-4faa-acb4-44c56f938651-kube-api-access-bbkc8" (OuterVolumeSpecName: "kube-api-access-bbkc8") pod "65dd567d-a032-4faa-acb4-44c56f938651" (UID: "65dd567d-a032-4faa-acb4-44c56f938651"). InnerVolumeSpecName "kube-api-access-bbkc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.408830 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-config" (OuterVolumeSpecName: "config") pod "35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" (UID: "35b2bbe1-ebf3-4549-b0af-6be1e3d77fad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.412792 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" (UID: "35b2bbe1-ebf3-4549-b0af-6be1e3d77fad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.416937 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-config" (OuterVolumeSpecName: "config") pod "65dd567d-a032-4faa-acb4-44c56f938651" (UID: "65dd567d-a032-4faa-acb4-44c56f938651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.419078 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "65dd567d-a032-4faa-acb4-44c56f938651" (UID: "65dd567d-a032-4faa-acb4-44c56f938651"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.473408 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.473440 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbkc8\" (UniqueName: \"kubernetes.io/projected/65dd567d-a032-4faa-acb4-44c56f938651-kube-api-access-bbkc8\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.473452 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj5vb\" (UniqueName: \"kubernetes.io/projected/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-kube-api-access-pj5vb\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.473461 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.473469 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.473476 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65dd567d-a032-4faa-acb4-44c56f938651-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.928858 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-96q27" event={"ID":"65dd567d-a032-4faa-acb4-44c56f938651","Type":"ContainerDied","Data":"bd83d6b76dad7d9a5b14b89d408ecf22f30903be7b5200e7d911eabb5a66c69f"} Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 
18:01:45.928897 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-96q27" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.928914 4787 scope.go:117] "RemoveContainer" containerID="c5378c68a1f01857168a7890649305e5cf6d6d2e08b489c6ff6ca65d3e1890c7" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.931849 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" event={"ID":"35b2bbe1-ebf3-4549-b0af-6be1e3d77fad","Type":"ContainerDied","Data":"e1c433c28020132eff1b9c1ee68cb5e0f64e3f9fc3c5f5a963e65221b179f458"} Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.931925 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-nz4cl" Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.953919 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-96q27"] Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.964777 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-96q27"] Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.971091 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-nz4cl"] Jan 26 18:01:45 crc kubenswrapper[4787]: I0126 18:01:45.978033 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-nz4cl"] Jan 26 18:01:46 crc kubenswrapper[4787]: I0126 18:01:46.293191 4787 scope.go:117] "RemoveContainer" containerID="3c500f8321ca3c8ef5120f0e76080041783ef920d55b80ece46e26daef5c2998" Jan 26 18:01:46 crc kubenswrapper[4787]: I0126 18:01:46.489036 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2j4vw"] Jan 26 18:01:46 crc kubenswrapper[4787]: I0126 18:01:46.518372 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-cfcjq"] Jan 26 18:01:46 crc 
kubenswrapper[4787]: W0126 18:01:46.755687 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f8be55d_39c3_4ede_aff3_62890aa7c0e5.slice/crio-fe04cfaae07824886ccc62159a830aa87b1ba061010afce352785d069624b629 WatchSource:0}: Error finding container fe04cfaae07824886ccc62159a830aa87b1ba061010afce352785d069624b629: Status 404 returned error can't find the container with id fe04cfaae07824886ccc62159a830aa87b1ba061010afce352785d069624b629 Jan 26 18:01:46 crc kubenswrapper[4787]: I0126 18:01:46.822730 4787 scope.go:117] "RemoveContainer" containerID="1b82ba61ff1491e9d8f749e0635dd324f0815c0a616d3f2a0772cc69d761fb51" Jan 26 18:01:46 crc kubenswrapper[4787]: I0126 18:01:46.935816 4787 scope.go:117] "RemoveContainer" containerID="6aa0e8a74496dc6354ec2af98f1a8366bad74eed47cdfdf73561402d5c37a812" Jan 26 18:01:46 crc kubenswrapper[4787]: I0126 18:01:46.939274 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2j4vw" event={"ID":"0f8be55d-39c3-4ede-aff3-62890aa7c0e5","Type":"ContainerStarted","Data":"fe04cfaae07824886ccc62159a830aa87b1ba061010afce352785d069624b629"} Jan 26 18:01:46 crc kubenswrapper[4787]: I0126 18:01:46.941492 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-cfcjq" event={"ID":"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a","Type":"ContainerStarted","Data":"a82801fda8af42f2d074793d65d8475b36b722228187aca734c6f8c436799669"} Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.606196 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" path="/var/lib/kubelet/pods/35b2bbe1-ebf3-4549-b0af-6be1e3d77fad/volumes" Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.607376 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65dd567d-a032-4faa-acb4-44c56f938651" 
path="/var/lib/kubelet/pods/65dd567d-a032-4faa-acb4-44c56f938651/volumes" Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.950245 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7ef9888f-835c-40ea-9d3b-c7084cbffd65","Type":"ContainerStarted","Data":"64bceda317e097fc87337cc8e59ab97a25c19ff22286cd1844866b2f8f52e65a"} Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.961372 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8125efc3-988e-4689-acea-515119c3764f","Type":"ContainerStarted","Data":"dce4e5453293ee450ee0ab6aaf6225e5f156c6dba0c7e6bdcccbe6c15fd75397"} Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.962723 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d3871ebb-6b25-4e36-a3a1-3e9a220768f5","Type":"ContainerStarted","Data":"27457a194befcc8699c0109b2497a7ad92fb469cd60a8cf8cacda2c5dfed2719"} Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.963913 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hpbh5" event={"ID":"254806cc-4007-4a34-9852-0716b123830f","Type":"ContainerStarted","Data":"111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b"} Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.965961 4787 generic.go:334] "Generic (PLEG): container finished" podID="bd9124dd-78e1-4006-b80e-1407323003b1" containerID="0a95635aa4a735f6315e154503e9d1d0721161d675c1019715a281f3dae1c281" exitCode=0 Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.966092 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" event={"ID":"bd9124dd-78e1-4006-b80e-1407323003b1","Type":"ContainerDied","Data":"0a95635aa4a735f6315e154503e9d1d0721161d675c1019715a281f3dae1c281"} Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.970614 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/memcached-0" event={"ID":"28ba670d-d2d7-47aa-bc54-6da4d0e532f3","Type":"ContainerStarted","Data":"cacee73bf45c04d60820e3ca12199d2fcac4ad380185a4c29165ce46e0b6bc52"} Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.971158 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.972716 4787 generic.go:334] "Generic (PLEG): container finished" podID="8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" containerID="297eff4d2da71aae655406a41aa21ac6b85e2fde4a62c8e107a6410ad2065ddc" exitCode=0 Jan 26 18:01:47 crc kubenswrapper[4787]: I0126 18:01:47.972822 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-cfcjq" event={"ID":"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a","Type":"ContainerDied","Data":"297eff4d2da71aae655406a41aa21ac6b85e2fde4a62c8e107a6410ad2065ddc"} Jan 26 18:01:48 crc kubenswrapper[4787]: I0126 18:01:48.096385 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.564744205 podStartE2EDuration="22.096367095s" podCreationTimestamp="2026-01-26 18:01:26 +0000 UTC" firstStartedPulling="2026-01-26 18:01:36.685623101 +0000 UTC m=+1065.392759234" lastFinishedPulling="2026-01-26 18:01:45.217245991 +0000 UTC m=+1073.924382124" observedRunningTime="2026-01-26 18:01:48.092615854 +0000 UTC m=+1076.799751997" watchObservedRunningTime="2026-01-26 18:01:48.096367095 +0000 UTC m=+1076.803503228" Jan 26 18:01:48 crc kubenswrapper[4787]: I0126 18:01:48.985686 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2ac3cbab-86ff-4544-bf13-b1039585edbe","Type":"ContainerStarted","Data":"0b733d1c4b7c91d6d0d94b6e6ebb1c3927dd7b9c58ec3f54aecf6573b8c5b28e"} Jan 26 18:01:48 crc kubenswrapper[4787]: I0126 18:01:48.988159 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-cfcjq" 
event={"ID":"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a","Type":"ContainerStarted","Data":"a211cdebeccfe2b25efa27cef09763cb4a897cb840897bd8d224d56c785dc212"} Jan 26 18:01:48 crc kubenswrapper[4787]: I0126 18:01:48.988310 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:48 crc kubenswrapper[4787]: I0126 18:01:48.989994 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1364ca0-6d34-493f-98b2-7956de27e72c","Type":"ContainerStarted","Data":"0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293"} Jan 26 18:01:48 crc kubenswrapper[4787]: I0126 18:01:48.990131 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 18:01:48 crc kubenswrapper[4787]: I0126 18:01:48.992398 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"57e65f25-43dd-4baf-b2fa-7256dcbd452d","Type":"ContainerStarted","Data":"2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c"} Jan 26 18:01:48 crc kubenswrapper[4787]: I0126 18:01:48.994063 4787 generic.go:334] "Generic (PLEG): container finished" podID="254806cc-4007-4a34-9852-0716b123830f" containerID="111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b" exitCode=0 Jan 26 18:01:48 crc kubenswrapper[4787]: I0126 18:01:48.994101 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hpbh5" event={"ID":"254806cc-4007-4a34-9852-0716b123830f","Type":"ContainerDied","Data":"111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b"} Jan 26 18:01:48 crc kubenswrapper[4787]: I0126 18:01:48.995745 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb1abb80-0591-49c7-b549-969066392a5a","Type":"ContainerStarted","Data":"48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d"} Jan 26 18:01:48 
crc kubenswrapper[4787]: I0126 18:01:48.998348 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5rlw8" event={"ID":"b55e012f-76df-4721-8be3-dba72f37cf33","Type":"ContainerStarted","Data":"0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e"} Jan 26 18:01:49 crc kubenswrapper[4787]: I0126 18:01:49.019008 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7878659675-cfcjq" podStartSLOduration=12.018992754 podStartE2EDuration="12.018992754s" podCreationTimestamp="2026-01-26 18:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:01:49.006976944 +0000 UTC m=+1077.714113077" watchObservedRunningTime="2026-01-26 18:01:49.018992754 +0000 UTC m=+1077.726128887" Jan 26 18:01:49 crc kubenswrapper[4787]: I0126 18:01:49.029686 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.673465354 podStartE2EDuration="21.029668253s" podCreationTimestamp="2026-01-26 18:01:28 +0000 UTC" firstStartedPulling="2026-01-26 18:01:36.524651341 +0000 UTC m=+1065.231787474" lastFinishedPulling="2026-01-26 18:01:46.88085424 +0000 UTC m=+1075.587990373" observedRunningTime="2026-01-26 18:01:49.022617133 +0000 UTC m=+1077.729753266" watchObservedRunningTime="2026-01-26 18:01:49.029668253 +0000 UTC m=+1077.736804386" Jan 26 18:01:49 crc kubenswrapper[4787]: I0126 18:01:49.122773 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5rlw8" podStartSLOduration=7.553541593 podStartE2EDuration="17.122755098s" podCreationTimestamp="2026-01-26 18:01:32 +0000 UTC" firstStartedPulling="2026-01-26 18:01:36.697616391 +0000 UTC m=+1065.404752534" lastFinishedPulling="2026-01-26 18:01:46.266829906 +0000 UTC m=+1074.973966039" observedRunningTime="2026-01-26 18:01:49.114120199 +0000 UTC 
m=+1077.821256342" watchObservedRunningTime="2026-01-26 18:01:49.122755098 +0000 UTC m=+1077.829891231" Jan 26 18:01:50 crc kubenswrapper[4787]: I0126 18:01:50.007189 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5rlw8" Jan 26 18:01:52 crc kubenswrapper[4787]: I0126 18:01:52.245306 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 18:01:53 crc kubenswrapper[4787]: I0126 18:01:53.037501 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" event={"ID":"bd9124dd-78e1-4006-b80e-1407323003b1","Type":"ContainerStarted","Data":"a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a"} Jan 26 18:01:53 crc kubenswrapper[4787]: I0126 18:01:53.039158 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:53 crc kubenswrapper[4787]: I0126 18:01:53.062709 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" podStartSLOduration=15.062692549 podStartE2EDuration="15.062692549s" podCreationTimestamp="2026-01-26 18:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:01:53.059533313 +0000 UTC m=+1081.766669466" watchObservedRunningTime="2026-01-26 18:01:53.062692549 +0000 UTC m=+1081.769828682" Jan 26 18:01:53 crc kubenswrapper[4787]: I0126 18:01:53.273797 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.045248 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8125efc3-988e-4689-acea-515119c3764f","Type":"ContainerStarted","Data":"c3b83431cb1aae7c46fbdf0e58b9cfa69783cc9a87a4bf1097cc0a7b9aad22e8"} Jan 26 
18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.048932 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hpbh5" event={"ID":"254806cc-4007-4a34-9852-0716b123830f","Type":"ContainerStarted","Data":"1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039"} Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.048990 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hpbh5" event={"ID":"254806cc-4007-4a34-9852-0716b123830f","Type":"ContainerStarted","Data":"71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f"} Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.049400 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.051338 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2ac3cbab-86ff-4544-bf13-b1039585edbe","Type":"ContainerStarted","Data":"0a8518325f6afe38dc22fc027b5b60a5dd918a78869776559a7fc2103c3d51df"} Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.053157 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2j4vw" event={"ID":"0f8be55d-39c3-4ede-aff3-62890aa7c0e5","Type":"ContainerStarted","Data":"9304d3acaa85f83d020a39079016b3b206affa7cd56bed13c15d8a89719d6e20"} Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.068662 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.730088222 podStartE2EDuration="22.068642558s" podCreationTimestamp="2026-01-26 18:01:32 +0000 UTC" firstStartedPulling="2026-01-26 18:01:37.026569699 +0000 UTC m=+1065.733705832" lastFinishedPulling="2026-01-26 18:01:53.365124035 +0000 UTC m=+1082.072260168" observedRunningTime="2026-01-26 18:01:54.06293017 +0000 UTC m=+1082.770066303" watchObservedRunningTime="2026-01-26 18:01:54.068642558 +0000 
UTC m=+1082.775778691" Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.090597 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-hpbh5" podStartSLOduration=13.796452151 podStartE2EDuration="22.090568018s" podCreationTimestamp="2026-01-26 18:01:32 +0000 UTC" firstStartedPulling="2026-01-26 18:01:37.999059527 +0000 UTC m=+1066.706195660" lastFinishedPulling="2026-01-26 18:01:46.293175394 +0000 UTC m=+1075.000311527" observedRunningTime="2026-01-26 18:01:54.087671418 +0000 UTC m=+1082.794807551" watchObservedRunningTime="2026-01-26 18:01:54.090568018 +0000 UTC m=+1082.797704151" Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.116566 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2j4vw" podStartSLOduration=10.527178917 podStartE2EDuration="17.116545148s" podCreationTimestamp="2026-01-26 18:01:37 +0000 UTC" firstStartedPulling="2026-01-26 18:01:46.757493412 +0000 UTC m=+1075.464629545" lastFinishedPulling="2026-01-26 18:01:53.346859643 +0000 UTC m=+1082.053995776" observedRunningTime="2026-01-26 18:01:54.110244855 +0000 UTC m=+1082.817380998" watchObservedRunningTime="2026-01-26 18:01:54.116545148 +0000 UTC m=+1082.823681281" Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.136070 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.753448842 podStartE2EDuration="19.136049741s" podCreationTimestamp="2026-01-26 18:01:35 +0000 UTC" firstStartedPulling="2026-01-26 18:01:37.967550044 +0000 UTC m=+1066.674686187" lastFinishedPulling="2026-01-26 18:01:53.350150943 +0000 UTC m=+1082.057287086" observedRunningTime="2026-01-26 18:01:54.128702973 +0000 UTC m=+1082.835839106" watchObservedRunningTime="2026-01-26 18:01:54.136049741 +0000 UTC m=+1082.843185874" Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.441314 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.494155 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:54 crc kubenswrapper[4787]: I0126 18:01:54.963203 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.008153 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.060143 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.060212 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.060228 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.095147 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.098371 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.478125 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 18:01:55 crc kubenswrapper[4787]: E0126 18:01:55.478460 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" containerName="dnsmasq-dns" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.478474 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" containerName="dnsmasq-dns" Jan 26 18:01:55 crc kubenswrapper[4787]: 
E0126 18:01:55.478484 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65dd567d-a032-4faa-acb4-44c56f938651" containerName="dnsmasq-dns" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.478490 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="65dd567d-a032-4faa-acb4-44c56f938651" containerName="dnsmasq-dns" Jan 26 18:01:55 crc kubenswrapper[4787]: E0126 18:01:55.478514 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65dd567d-a032-4faa-acb4-44c56f938651" containerName="init" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.478521 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="65dd567d-a032-4faa-acb4-44c56f938651" containerName="init" Jan 26 18:01:55 crc kubenswrapper[4787]: E0126 18:01:55.478528 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" containerName="init" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.478534 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" containerName="init" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.478702 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="35b2bbe1-ebf3-4549-b0af-6be1e3d77fad" containerName="dnsmasq-dns" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.478718 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="65dd567d-a032-4faa-acb4-44c56f938651" containerName="dnsmasq-dns" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.479584 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.485217 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.485295 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.485499 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.485601 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-24h8d" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.494196 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.579555 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-config\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.579624 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-scripts\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.579648 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 
crc kubenswrapper[4787]: I0126 18:01:55.579701 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/208dca76-0c20-4fd9-a685-76144777c48c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.579742 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.579851 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.580011 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxn77\" (UniqueName: \"kubernetes.io/projected/208dca76-0c20-4fd9-a685-76144777c48c-kube-api-access-cxn77\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.681295 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-config\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.681386 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-scripts\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.681439 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.682675 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-scripts\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.682693 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-config\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.682812 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/208dca76-0c20-4fd9-a685-76144777c48c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.683215 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 
18:01:55.683252 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.683307 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxn77\" (UniqueName: \"kubernetes.io/projected/208dca76-0c20-4fd9-a685-76144777c48c-kube-api-access-cxn77\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.683152 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/208dca76-0c20-4fd9-a685-76144777c48c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.689063 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.690837 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.693143 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.702962 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxn77\" (UniqueName: \"kubernetes.io/projected/208dca76-0c20-4fd9-a685-76144777c48c-kube-api-access-cxn77\") pod \"ovn-northd-0\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " pod="openstack/ovn-northd-0" Jan 26 18:01:55 crc kubenswrapper[4787]: I0126 18:01:55.805678 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 18:01:56 crc kubenswrapper[4787]: I0126 18:01:56.299869 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 18:01:56 crc kubenswrapper[4787]: W0126 18:01:56.309118 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod208dca76_0c20_4fd9_a685_76144777c48c.slice/crio-dca345c6494966284813e9102649c174b28820808082162b5a1c018cf28a58d5 WatchSource:0}: Error finding container dca345c6494966284813e9102649c174b28820808082162b5a1c018cf28a58d5: Status 404 returned error can't find the container with id dca345c6494966284813e9102649c174b28820808082162b5a1c018cf28a58d5 Jan 26 18:01:57 crc kubenswrapper[4787]: I0126 18:01:57.076621 4787 generic.go:334] "Generic (PLEG): container finished" podID="7ef9888f-835c-40ea-9d3b-c7084cbffd65" containerID="64bceda317e097fc87337cc8e59ab97a25c19ff22286cd1844866b2f8f52e65a" exitCode=0 Jan 26 18:01:57 crc kubenswrapper[4787]: I0126 18:01:57.076682 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7ef9888f-835c-40ea-9d3b-c7084cbffd65","Type":"ContainerDied","Data":"64bceda317e097fc87337cc8e59ab97a25c19ff22286cd1844866b2f8f52e65a"} Jan 26 18:01:57 crc 
kubenswrapper[4787]: I0126 18:01:57.079832 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"208dca76-0c20-4fd9-a685-76144777c48c","Type":"ContainerStarted","Data":"dca345c6494966284813e9102649c174b28820808082162b5a1c018cf28a58d5"} Jan 26 18:01:57 crc kubenswrapper[4787]: I0126 18:01:57.081366 4787 generic.go:334] "Generic (PLEG): container finished" podID="d3871ebb-6b25-4e36-a3a1-3e9a220768f5" containerID="27457a194befcc8699c0109b2497a7ad92fb469cd60a8cf8cacda2c5dfed2719" exitCode=0 Jan 26 18:01:57 crc kubenswrapper[4787]: I0126 18:01:57.081458 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d3871ebb-6b25-4e36-a3a1-3e9a220768f5","Type":"ContainerDied","Data":"27457a194befcc8699c0109b2497a7ad92fb469cd60a8cf8cacda2c5dfed2719"} Jan 26 18:01:58 crc kubenswrapper[4787]: I0126 18:01:58.440053 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:01:58 crc kubenswrapper[4787]: I0126 18:01:58.515715 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-cfcjq"] Jan 26 18:01:58 crc kubenswrapper[4787]: I0126 18:01:58.515999 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7878659675-cfcjq" podUID="8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" containerName="dnsmasq-dns" containerID="cri-o://a211cdebeccfe2b25efa27cef09763cb4a897cb840897bd8d224d56c785dc212" gracePeriod=10 Jan 26 18:01:58 crc kubenswrapper[4787]: I0126 18:01:58.998546 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.096619 4787 generic.go:334] "Generic (PLEG): container finished" podID="8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" containerID="a211cdebeccfe2b25efa27cef09763cb4a897cb840897bd8d224d56c785dc212" exitCode=0 Jan 26 18:01:59 crc 
kubenswrapper[4787]: I0126 18:01:59.096690 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-cfcjq" event={"ID":"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a","Type":"ContainerDied","Data":"a211cdebeccfe2b25efa27cef09763cb4a897cb840897bd8d224d56c785dc212"} Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.158565 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ffbkk"] Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.160201 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.184852 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ffbkk"] Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.244569 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.244614 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.244647 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-config\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 
26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.244905 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.245044 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqlsv\" (UniqueName: \"kubernetes.io/projected/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-kube-api-access-dqlsv\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.346617 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.346683 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-config\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.346730 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc 
kubenswrapper[4787]: I0126 18:01:59.346759 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqlsv\" (UniqueName: \"kubernetes.io/projected/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-kube-api-access-dqlsv\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.346827 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.347678 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.348204 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.348701 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-config\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.349326 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.383079 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqlsv\" (UniqueName: \"kubernetes.io/projected/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-kube-api-access-dqlsv\") pod \"dnsmasq-dns-67fdf7998c-ffbkk\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.484278 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:01:59 crc kubenswrapper[4787]: I0126 18:01:59.966124 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ffbkk"] Jan 26 18:01:59 crc kubenswrapper[4787]: W0126 18:01:59.970142 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod455b92c3_f9ce_4bdf_9472_b41e3f4a1443.slice/crio-18f63c0ae886993d5f1f13a6177fa9ac320257ba39d786eb54fb71e52eb15578 WatchSource:0}: Error finding container 18f63c0ae886993d5f1f13a6177fa9ac320257ba39d786eb54fb71e52eb15578: Status 404 returned error can't find the container with id 18f63c0ae886993d5f1f13a6177fa9ac320257ba39d786eb54fb71e52eb15578 Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.108423 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" event={"ID":"455b92c3-f9ce-4bdf-9472-b41e3f4a1443","Type":"ContainerStarted","Data":"18f63c0ae886993d5f1f13a6177fa9ac320257ba39d786eb54fb71e52eb15578"} Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.231856 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/swift-storage-0"] Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.237981 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.240000 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.240514 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.240607 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.240885 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-bv8wn" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.260802 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.363828 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba57dd7-3de5-4e62-817d-4fc2c295ddee-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.363875 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.363916 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-cache\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.363936 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.363976 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-lock\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.363996 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84lk9\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-kube-api-access-84lk9\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.466059 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-cache\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.466379 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc 
kubenswrapper[4787]: I0126 18:02:00.466422 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-lock\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.466459 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84lk9\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-kube-api-access-84lk9\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.466588 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba57dd7-3de5-4e62-817d-4fc2c295ddee-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.466618 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.466675 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-cache\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: E0126 18:02:00.466765 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:02:00 crc kubenswrapper[4787]: E0126 18:02:00.466778 
4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.466794 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: E0126 18:02:00.466817 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift podName:fba57dd7-3de5-4e62-817d-4fc2c295ddee nodeName:}" failed. No retries permitted until 2026-01-26 18:02:00.966801157 +0000 UTC m=+1089.673937290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift") pod "swift-storage-0" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee") : configmap "swift-ring-files" not found Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.467168 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-lock\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.472133 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba57dd7-3de5-4e62-817d-4fc2c295ddee-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.486513 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-84lk9\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-kube-api-access-84lk9\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.488936 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: I0126 18:02:00.974471 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:00 crc kubenswrapper[4787]: E0126 18:02:00.974673 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:02:00 crc kubenswrapper[4787]: E0126 18:02:00.974709 4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 18:02:00 crc kubenswrapper[4787]: E0126 18:02:00.974801 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift podName:fba57dd7-3de5-4e62-817d-4fc2c295ddee nodeName:}" failed. No retries permitted until 2026-01-26 18:02:01.974781402 +0000 UTC m=+1090.681917545 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift") pod "swift-storage-0" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee") : configmap "swift-ring-files" not found Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.840112 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.890804 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-ovsdbserver-nb\") pod \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.890859 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-dns-svc\") pod \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.891015 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9km6\" (UniqueName: \"kubernetes.io/projected/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-kube-api-access-z9km6\") pod \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.891042 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-config\") pod \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\" (UID: \"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a\") " Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.900245 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-kube-api-access-z9km6" (OuterVolumeSpecName: "kube-api-access-z9km6") pod "8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" (UID: "8efcb7e9-652e-45a5-b9cf-cc8e68fed70a"). InnerVolumeSpecName "kube-api-access-z9km6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.933771 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-config" (OuterVolumeSpecName: "config") pod "8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" (UID: "8efcb7e9-652e-45a5-b9cf-cc8e68fed70a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.939988 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" (UID: "8efcb7e9-652e-45a5-b9cf-cc8e68fed70a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.946551 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" (UID: "8efcb7e9-652e-45a5-b9cf-cc8e68fed70a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.993357 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.993435 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9km6\" (UniqueName: \"kubernetes.io/projected/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-kube-api-access-z9km6\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.993447 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.993456 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:01 crc kubenswrapper[4787]: I0126 18:02:01.993472 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:01 crc kubenswrapper[4787]: E0126 18:02:01.993575 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:02:01 crc kubenswrapper[4787]: E0126 18:02:01.993603 4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 18:02:01 crc kubenswrapper[4787]: E0126 18:02:01.993663 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift podName:fba57dd7-3de5-4e62-817d-4fc2c295ddee nodeName:}" failed. No retries permitted until 2026-01-26 18:02:03.993646043 +0000 UTC m=+1092.700782176 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift") pod "swift-storage-0" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee") : configmap "swift-ring-files" not found Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.126608 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-cfcjq" event={"ID":"8efcb7e9-652e-45a5-b9cf-cc8e68fed70a","Type":"ContainerDied","Data":"a82801fda8af42f2d074793d65d8475b36b722228187aca734c6f8c436799669"} Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.126664 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-cfcjq" Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.126682 4787 scope.go:117] "RemoveContainer" containerID="a211cdebeccfe2b25efa27cef09763cb4a897cb840897bd8d224d56c785dc212" Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.130493 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7ef9888f-835c-40ea-9d3b-c7084cbffd65","Type":"ContainerStarted","Data":"7378f982f6cae9a52f12316a36e90af7a418c481c9a556fe677e423d6b63de51"} Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.136439 4787 generic.go:334] "Generic (PLEG): container finished" podID="455b92c3-f9ce-4bdf-9472-b41e3f4a1443" containerID="b6dfd0da4e4d4fb34e69e465d03475d48a1d41b876029450200f087dd7d882c5" exitCode=0 Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.136492 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" 
event={"ID":"455b92c3-f9ce-4bdf-9472-b41e3f4a1443","Type":"ContainerDied","Data":"b6dfd0da4e4d4fb34e69e465d03475d48a1d41b876029450200f087dd7d882c5"} Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.140393 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d3871ebb-6b25-4e36-a3a1-3e9a220768f5","Type":"ContainerStarted","Data":"b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe"} Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.162916 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.82289969 podStartE2EDuration="37.162896693s" podCreationTimestamp="2026-01-26 18:01:25 +0000 UTC" firstStartedPulling="2026-01-26 18:01:36.712139612 +0000 UTC m=+1065.419275745" lastFinishedPulling="2026-01-26 18:01:46.052136625 +0000 UTC m=+1074.759272748" observedRunningTime="2026-01-26 18:02:02.155480923 +0000 UTC m=+1090.862617066" watchObservedRunningTime="2026-01-26 18:02:02.162896693 +0000 UTC m=+1090.870032826" Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.181164 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-cfcjq"] Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.186058 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7878659675-cfcjq"] Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.190987 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.929283388 podStartE2EDuration="38.190939283s" podCreationTimestamp="2026-01-26 18:01:24 +0000 UTC" firstStartedPulling="2026-01-26 18:01:35.991742262 +0000 UTC m=+1064.698878405" lastFinishedPulling="2026-01-26 18:01:45.253398167 +0000 UTC m=+1073.960534300" observedRunningTime="2026-01-26 18:02:02.189921898 +0000 UTC m=+1090.897058051" watchObservedRunningTime="2026-01-26 18:02:02.190939283 +0000 
UTC m=+1090.898075416" Jan 26 18:02:02 crc kubenswrapper[4787]: I0126 18:02:02.312755 4787 scope.go:117] "RemoveContainer" containerID="297eff4d2da71aae655406a41aa21ac6b85e2fde4a62c8e107a6410ad2065ddc" Jan 26 18:02:03 crc kubenswrapper[4787]: I0126 18:02:03.150981 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"208dca76-0c20-4fd9-a685-76144777c48c","Type":"ContainerStarted","Data":"a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd"} Jan 26 18:02:03 crc kubenswrapper[4787]: I0126 18:02:03.151370 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"208dca76-0c20-4fd9-a685-76144777c48c","Type":"ContainerStarted","Data":"17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1"} Jan 26 18:02:03 crc kubenswrapper[4787]: I0126 18:02:03.151402 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 18:02:03 crc kubenswrapper[4787]: I0126 18:02:03.152599 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" event={"ID":"455b92c3-f9ce-4bdf-9472-b41e3f4a1443","Type":"ContainerStarted","Data":"426f484ee66b6c1fd184b9a2bfce9584ab95e10af4fb724f3206c08d58bddff4"} Jan 26 18:02:03 crc kubenswrapper[4787]: I0126 18:02:03.153109 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:02:03 crc kubenswrapper[4787]: I0126 18:02:03.173843 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.103777861 podStartE2EDuration="8.173818502s" podCreationTimestamp="2026-01-26 18:01:55 +0000 UTC" firstStartedPulling="2026-01-26 18:01:56.311013537 +0000 UTC m=+1085.018149670" lastFinishedPulling="2026-01-26 18:02:02.381054178 +0000 UTC m=+1091.088190311" observedRunningTime="2026-01-26 18:02:03.16631836 +0000 UTC m=+1091.873454503" 
watchObservedRunningTime="2026-01-26 18:02:03.173818502 +0000 UTC m=+1091.880954635" Jan 26 18:02:03 crc kubenswrapper[4787]: I0126 18:02:03.188941 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" podStartSLOduration=4.1889256679999995 podStartE2EDuration="4.188925668s" podCreationTimestamp="2026-01-26 18:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:02:03.183737472 +0000 UTC m=+1091.890873605" watchObservedRunningTime="2026-01-26 18:02:03.188925668 +0000 UTC m=+1091.896061791" Jan 26 18:02:03 crc kubenswrapper[4787]: I0126 18:02:03.597367 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" path="/var/lib/kubelet/pods/8efcb7e9-652e-45a5-b9cf-cc8e68fed70a/volumes" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.031424 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:04 crc kubenswrapper[4787]: E0126 18:02:04.031650 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:02:04 crc kubenswrapper[4787]: E0126 18:02:04.031674 4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 18:02:04 crc kubenswrapper[4787]: E0126 18:02:04.031733 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift podName:fba57dd7-3de5-4e62-817d-4fc2c295ddee nodeName:}" failed. 
No retries permitted until 2026-01-26 18:02:08.031715803 +0000 UTC m=+1096.738851936 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift") pod "swift-storage-0" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee") : configmap "swift-ring-files" not found Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.140435 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-swnmc"] Jan 26 18:02:04 crc kubenswrapper[4787]: E0126 18:02:04.140779 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" containerName="init" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.140803 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" containerName="init" Jan 26 18:02:04 crc kubenswrapper[4787]: E0126 18:02:04.140836 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" containerName="dnsmasq-dns" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.140842 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" containerName="dnsmasq-dns" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.141066 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efcb7e9-652e-45a5-b9cf-cc8e68fed70a" containerName="dnsmasq-dns" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.141597 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.143511 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.143867 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.144220 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.152176 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-swnmc"] Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.214244 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-swnmc"] Jan 26 18:02:04 crc kubenswrapper[4787]: E0126 18:02:04.214811 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-f2m9f ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-swnmc" podUID="32000377-fbd8-48d0-a228-b85f852b8a82" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.232990 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zz9w2"] Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.234032 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.235556 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/32000377-fbd8-48d0-a228-b85f852b8a82-etc-swift\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.235586 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2m9f\" (UniqueName: \"kubernetes.io/projected/32000377-fbd8-48d0-a228-b85f852b8a82-kube-api-access-f2m9f\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.235622 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-scripts\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.235686 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-combined-ca-bundle\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.235839 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-ring-data-devices\") pod \"swift-ring-rebalance-swnmc\" 
(UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.235859 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-swiftconf\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.235898 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-dispersionconf\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.245966 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zz9w2"] Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.337490 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-dispersionconf\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.337574 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-swiftconf\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.337616 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/empty-dir/6dfd1be0-7c5e-427e-8847-28e938c19844-etc-swift\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.337641 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-combined-ca-bundle\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.337672 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/32000377-fbd8-48d0-a228-b85f852b8a82-etc-swift\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.337704 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2m9f\" (UniqueName: \"kubernetes.io/projected/32000377-fbd8-48d0-a228-b85f852b8a82-kube-api-access-f2m9f\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.337728 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv8fm\" (UniqueName: \"kubernetes.io/projected/6dfd1be0-7c5e-427e-8847-28e938c19844-kube-api-access-hv8fm\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.337762 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-ring-data-devices\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.337788 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-scripts\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.337974 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-combined-ca-bundle\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.338200 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/32000377-fbd8-48d0-a228-b85f852b8a82-etc-swift\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.338235 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-scripts\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.338320 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-ring-data-devices\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.338351 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-dispersionconf\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.338409 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-swiftconf\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.338692 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-scripts\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.339134 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-ring-data-devices\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.342797 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-dispersionconf\") pod 
\"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.346520 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-combined-ca-bundle\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.347404 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-swiftconf\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.355025 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2m9f\" (UniqueName: \"kubernetes.io/projected/32000377-fbd8-48d0-a228-b85f852b8a82-kube-api-access-f2m9f\") pod \"swift-ring-rebalance-swnmc\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.439776 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv8fm\" (UniqueName: \"kubernetes.io/projected/6dfd1be0-7c5e-427e-8847-28e938c19844-kube-api-access-hv8fm\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.439830 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-ring-data-devices\") pod \"swift-ring-rebalance-zz9w2\" (UID: 
\"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.439927 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-scripts\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.439972 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-dispersionconf\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.440023 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-swiftconf\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.440048 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6dfd1be0-7c5e-427e-8847-28e938c19844-etc-swift\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.440063 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-combined-ca-bundle\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc 
kubenswrapper[4787]: I0126 18:02:04.440844 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-scripts\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.440962 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-ring-data-devices\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.441148 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6dfd1be0-7c5e-427e-8847-28e938c19844-etc-swift\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.443325 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-swiftconf\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.446316 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-dispersionconf\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.448543 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-combined-ca-bundle\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.470401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv8fm\" (UniqueName: \"kubernetes.io/projected/6dfd1be0-7c5e-427e-8847-28e938c19844-kube-api-access-hv8fm\") pod \"swift-ring-rebalance-zz9w2\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") " pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.547688 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zz9w2" Jan 26 18:02:04 crc kubenswrapper[4787]: I0126 18:02:04.965035 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zz9w2"] Jan 26 18:02:05 crc kubenswrapper[4787]: E0126 18:02:05.161127 4787 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:58758->38.102.83.69:41761: write tcp 38.102.83.69:58758->38.102.83.69:41761: write: broken pipe Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.168126 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zz9w2" event={"ID":"6dfd1be0-7c5e-427e-8847-28e938c19844","Type":"ContainerStarted","Data":"a098d75cb8c2ec05f9eae896d8463463d10fe4896299e9fa69fe6908c7b0bb2d"} Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.168173 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.179829 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.255996 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-combined-ca-bundle\") pod \"32000377-fbd8-48d0-a228-b85f852b8a82\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.256087 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-swiftconf\") pod \"32000377-fbd8-48d0-a228-b85f852b8a82\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.256118 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2m9f\" (UniqueName: \"kubernetes.io/projected/32000377-fbd8-48d0-a228-b85f852b8a82-kube-api-access-f2m9f\") pod \"32000377-fbd8-48d0-a228-b85f852b8a82\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.256156 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-ring-data-devices\") pod \"32000377-fbd8-48d0-a228-b85f852b8a82\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.256297 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-dispersionconf\") pod \"32000377-fbd8-48d0-a228-b85f852b8a82\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.256348 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/32000377-fbd8-48d0-a228-b85f852b8a82-etc-swift\") pod \"32000377-fbd8-48d0-a228-b85f852b8a82\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.256417 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-scripts\") pod \"32000377-fbd8-48d0-a228-b85f852b8a82\" (UID: \"32000377-fbd8-48d0-a228-b85f852b8a82\") " Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.257565 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-scripts" (OuterVolumeSpecName: "scripts") pod "32000377-fbd8-48d0-a228-b85f852b8a82" (UID: "32000377-fbd8-48d0-a228-b85f852b8a82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.257895 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "32000377-fbd8-48d0-a228-b85f852b8a82" (UID: "32000377-fbd8-48d0-a228-b85f852b8a82"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.258130 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32000377-fbd8-48d0-a228-b85f852b8a82-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "32000377-fbd8-48d0-a228-b85f852b8a82" (UID: "32000377-fbd8-48d0-a228-b85f852b8a82"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.263258 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "32000377-fbd8-48d0-a228-b85f852b8a82" (UID: "32000377-fbd8-48d0-a228-b85f852b8a82"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.263310 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32000377-fbd8-48d0-a228-b85f852b8a82" (UID: "32000377-fbd8-48d0-a228-b85f852b8a82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.263333 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "32000377-fbd8-48d0-a228-b85f852b8a82" (UID: "32000377-fbd8-48d0-a228-b85f852b8a82"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.263405 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32000377-fbd8-48d0-a228-b85f852b8a82-kube-api-access-f2m9f" (OuterVolumeSpecName: "kube-api-access-f2m9f") pod "32000377-fbd8-48d0-a228-b85f852b8a82" (UID: "32000377-fbd8-48d0-a228-b85f852b8a82"). InnerVolumeSpecName "kube-api-access-f2m9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.359190 4787 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.359238 4787 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/32000377-fbd8-48d0-a228-b85f852b8a82-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.359253 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.359266 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.359281 4787 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/32000377-fbd8-48d0-a228-b85f852b8a82-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.359293 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2m9f\" (UniqueName: \"kubernetes.io/projected/32000377-fbd8-48d0-a228-b85f852b8a82-kube-api-access-f2m9f\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.359308 4787 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/32000377-fbd8-48d0-a228-b85f852b8a82-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.475138 4787 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 18:02:05 crc kubenswrapper[4787]: I0126 18:02:05.475264 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 18:02:06 crc kubenswrapper[4787]: I0126 18:02:06.175253 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-swnmc" Jan 26 18:02:06 crc kubenswrapper[4787]: I0126 18:02:06.236730 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-swnmc"] Jan 26 18:02:06 crc kubenswrapper[4787]: I0126 18:02:06.245959 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-swnmc"] Jan 26 18:02:06 crc kubenswrapper[4787]: I0126 18:02:06.324708 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 26 18:02:06 crc kubenswrapper[4787]: E0126 18:02:06.871933 4787 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.69:58766->38.102.83.69:41761: read tcp 38.102.83.69:58766->38.102.83.69:41761: read: connection reset by peer Jan 26 18:02:06 crc kubenswrapper[4787]: I0126 18:02:06.876943 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 18:02:06 crc kubenswrapper[4787]: I0126 18:02:06.877005 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 18:02:07 crc kubenswrapper[4787]: I0126 18:02:07.261362 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 18:02:07 crc kubenswrapper[4787]: I0126 18:02:07.597136 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32000377-fbd8-48d0-a228-b85f852b8a82" path="/var/lib/kubelet/pods/32000377-fbd8-48d0-a228-b85f852b8a82/volumes" Jan 26 
18:02:08 crc kubenswrapper[4787]: I0126 18:02:08.115898 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:08 crc kubenswrapper[4787]: E0126 18:02:08.116093 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:02:08 crc kubenswrapper[4787]: E0126 18:02:08.116118 4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 18:02:08 crc kubenswrapper[4787]: E0126 18:02:08.116171 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift podName:fba57dd7-3de5-4e62-817d-4fc2c295ddee nodeName:}" failed. No retries permitted until 2026-01-26 18:02:16.116154956 +0000 UTC m=+1104.823291089 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift") pod "swift-storage-0" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee") : configmap "swift-ring-files" not found Jan 26 18:02:09 crc kubenswrapper[4787]: I0126 18:02:09.187860 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 18:02:09 crc kubenswrapper[4787]: I0126 18:02:09.204378 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zz9w2" event={"ID":"6dfd1be0-7c5e-427e-8847-28e938c19844","Type":"ContainerStarted","Data":"681caf359914c2e80c323e59ae7950bc367a24c1561406e9f672a0e4a771fefd"} Jan 26 18:02:09 crc kubenswrapper[4787]: I0126 18:02:09.237384 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-zz9w2" podStartSLOduration=1.8772025490000002 podStartE2EDuration="5.237370316s" podCreationTimestamp="2026-01-26 18:02:04 +0000 UTC" firstStartedPulling="2026-01-26 18:02:04.974327438 +0000 UTC m=+1093.681463571" lastFinishedPulling="2026-01-26 18:02:08.334495205 +0000 UTC m=+1097.041631338" observedRunningTime="2026-01-26 18:02:09.234590628 +0000 UTC m=+1097.941726761" watchObservedRunningTime="2026-01-26 18:02:09.237370316 +0000 UTC m=+1097.944506449" Jan 26 18:02:09 crc kubenswrapper[4787]: I0126 18:02:09.272159 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 18:02:09 crc kubenswrapper[4787]: I0126 18:02:09.486302 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:02:09 crc kubenswrapper[4787]: I0126 18:02:09.544260 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-hcxwt"] Jan 26 18:02:09 crc kubenswrapper[4787]: I0126 18:02:09.544722 4787 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" podUID="bd9124dd-78e1-4006-b80e-1407323003b1" containerName="dnsmasq-dns" containerID="cri-o://a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a" gracePeriod=10 Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.085267 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.211458 4787 generic.go:334] "Generic (PLEG): container finished" podID="bd9124dd-78e1-4006-b80e-1407323003b1" containerID="a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a" exitCode=0 Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.211511 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" event={"ID":"bd9124dd-78e1-4006-b80e-1407323003b1","Type":"ContainerDied","Data":"a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a"} Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.211562 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" event={"ID":"bd9124dd-78e1-4006-b80e-1407323003b1","Type":"ContainerDied","Data":"434d50744e4a358eff67056bd9558836f97dbb6ae14e28f5ba2f3b65050b9a7a"} Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.211582 4787 scope.go:117] "RemoveContainer" containerID="a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.212387 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-hcxwt" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.242577 4787 scope.go:117] "RemoveContainer" containerID="0a95635aa4a735f6315e154503e9d1d0721161d675c1019715a281f3dae1c281" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.254139 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-nb\") pod \"bd9124dd-78e1-4006-b80e-1407323003b1\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.254373 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-config\") pod \"bd9124dd-78e1-4006-b80e-1407323003b1\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.254567 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-sb\") pod \"bd9124dd-78e1-4006-b80e-1407323003b1\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.254691 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2n4x\" (UniqueName: \"kubernetes.io/projected/bd9124dd-78e1-4006-b80e-1407323003b1-kube-api-access-c2n4x\") pod \"bd9124dd-78e1-4006-b80e-1407323003b1\" (UID: \"bd9124dd-78e1-4006-b80e-1407323003b1\") " Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.254773 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-dns-svc\") pod \"bd9124dd-78e1-4006-b80e-1407323003b1\" (UID: 
\"bd9124dd-78e1-4006-b80e-1407323003b1\") " Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.261476 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd9124dd-78e1-4006-b80e-1407323003b1-kube-api-access-c2n4x" (OuterVolumeSpecName: "kube-api-access-c2n4x") pod "bd9124dd-78e1-4006-b80e-1407323003b1" (UID: "bd9124dd-78e1-4006-b80e-1407323003b1"). InnerVolumeSpecName "kube-api-access-c2n4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.263872 4787 scope.go:117] "RemoveContainer" containerID="a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a" Jan 26 18:02:10 crc kubenswrapper[4787]: E0126 18:02:10.264422 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a\": container with ID starting with a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a not found: ID does not exist" containerID="a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.264452 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a"} err="failed to get container status \"a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a\": rpc error: code = NotFound desc = could not find container \"a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a\": container with ID starting with a06bfe23b7fa71e20c9b2797ef3c9b6357c662237da9eb13593d229d5a53125a not found: ID does not exist" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.264476 4787 scope.go:117] "RemoveContainer" containerID="0a95635aa4a735f6315e154503e9d1d0721161d675c1019715a281f3dae1c281" Jan 26 18:02:10 crc kubenswrapper[4787]: E0126 18:02:10.264817 4787 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a95635aa4a735f6315e154503e9d1d0721161d675c1019715a281f3dae1c281\": container with ID starting with 0a95635aa4a735f6315e154503e9d1d0721161d675c1019715a281f3dae1c281 not found: ID does not exist" containerID="0a95635aa4a735f6315e154503e9d1d0721161d675c1019715a281f3dae1c281" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.264840 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a95635aa4a735f6315e154503e9d1d0721161d675c1019715a281f3dae1c281"} err="failed to get container status \"0a95635aa4a735f6315e154503e9d1d0721161d675c1019715a281f3dae1c281\": rpc error: code = NotFound desc = could not find container \"0a95635aa4a735f6315e154503e9d1d0721161d675c1019715a281f3dae1c281\": container with ID starting with 0a95635aa4a735f6315e154503e9d1d0721161d675c1019715a281f3dae1c281 not found: ID does not exist" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.298218 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd9124dd-78e1-4006-b80e-1407323003b1" (UID: "bd9124dd-78e1-4006-b80e-1407323003b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.301555 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-config" (OuterVolumeSpecName: "config") pod "bd9124dd-78e1-4006-b80e-1407323003b1" (UID: "bd9124dd-78e1-4006-b80e-1407323003b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.301605 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd9124dd-78e1-4006-b80e-1407323003b1" (UID: "bd9124dd-78e1-4006-b80e-1407323003b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.305679 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd9124dd-78e1-4006-b80e-1407323003b1" (UID: "bd9124dd-78e1-4006-b80e-1407323003b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.357025 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2n4x\" (UniqueName: \"kubernetes.io/projected/bd9124dd-78e1-4006-b80e-1407323003b1-kube-api-access-c2n4x\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.357653 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.357735 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.357819 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:10 crc 
kubenswrapper[4787]: I0126 18:02:10.357893 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd9124dd-78e1-4006-b80e-1407323003b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.566782 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-hcxwt"] Jan 26 18:02:10 crc kubenswrapper[4787]: I0126 18:02:10.575653 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-hcxwt"] Jan 26 18:02:11 crc kubenswrapper[4787]: I0126 18:02:11.601354 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd9124dd-78e1-4006-b80e-1407323003b1" path="/var/lib/kubelet/pods/bd9124dd-78e1-4006-b80e-1407323003b1/volumes" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.498542 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-xnjln"] Jan 26 18:02:12 crc kubenswrapper[4787]: E0126 18:02:12.499301 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9124dd-78e1-4006-b80e-1407323003b1" containerName="init" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.499333 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9124dd-78e1-4006-b80e-1407323003b1" containerName="init" Jan 26 18:02:12 crc kubenswrapper[4787]: E0126 18:02:12.499354 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9124dd-78e1-4006-b80e-1407323003b1" containerName="dnsmasq-dns" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.499364 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9124dd-78e1-4006-b80e-1407323003b1" containerName="dnsmasq-dns" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.499659 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9124dd-78e1-4006-b80e-1407323003b1" containerName="dnsmasq-dns" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 
18:02:12.500608 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xnjln" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.507804 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-99a8-account-create-update-mb52q"] Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.509874 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-99a8-account-create-update-mb52q" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.511388 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.515709 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-99a8-account-create-update-mb52q"] Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.542250 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xnjln"] Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.598004 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c4d662c-4ce4-4a74-abb0-e751d736d531-operator-scripts\") pod \"glance-db-create-xnjln\" (UID: \"5c4d662c-4ce4-4a74-abb0-e751d736d531\") " pod="openstack/glance-db-create-xnjln" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.598068 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/764c11b4-f0d4-46e4-9742-570e42729ab8-operator-scripts\") pod \"glance-99a8-account-create-update-mb52q\" (UID: \"764c11b4-f0d4-46e4-9742-570e42729ab8\") " pod="openstack/glance-99a8-account-create-update-mb52q" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.598117 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-czhxf\" (UniqueName: \"kubernetes.io/projected/5c4d662c-4ce4-4a74-abb0-e751d736d531-kube-api-access-czhxf\") pod \"glance-db-create-xnjln\" (UID: \"5c4d662c-4ce4-4a74-abb0-e751d736d531\") " pod="openstack/glance-db-create-xnjln" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.598143 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hks9\" (UniqueName: \"kubernetes.io/projected/764c11b4-f0d4-46e4-9742-570e42729ab8-kube-api-access-6hks9\") pod \"glance-99a8-account-create-update-mb52q\" (UID: \"764c11b4-f0d4-46e4-9742-570e42729ab8\") " pod="openstack/glance-99a8-account-create-update-mb52q" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.699492 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/764c11b4-f0d4-46e4-9742-570e42729ab8-operator-scripts\") pod \"glance-99a8-account-create-update-mb52q\" (UID: \"764c11b4-f0d4-46e4-9742-570e42729ab8\") " pod="openstack/glance-99a8-account-create-update-mb52q" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.699555 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czhxf\" (UniqueName: \"kubernetes.io/projected/5c4d662c-4ce4-4a74-abb0-e751d736d531-kube-api-access-czhxf\") pod \"glance-db-create-xnjln\" (UID: \"5c4d662c-4ce4-4a74-abb0-e751d736d531\") " pod="openstack/glance-db-create-xnjln" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.699580 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hks9\" (UniqueName: \"kubernetes.io/projected/764c11b4-f0d4-46e4-9742-570e42729ab8-kube-api-access-6hks9\") pod \"glance-99a8-account-create-update-mb52q\" (UID: \"764c11b4-f0d4-46e4-9742-570e42729ab8\") " pod="openstack/glance-99a8-account-create-update-mb52q" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.699665 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c4d662c-4ce4-4a74-abb0-e751d736d531-operator-scripts\") pod \"glance-db-create-xnjln\" (UID: \"5c4d662c-4ce4-4a74-abb0-e751d736d531\") " pod="openstack/glance-db-create-xnjln" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.700534 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/764c11b4-f0d4-46e4-9742-570e42729ab8-operator-scripts\") pod \"glance-99a8-account-create-update-mb52q\" (UID: \"764c11b4-f0d4-46e4-9742-570e42729ab8\") " pod="openstack/glance-99a8-account-create-update-mb52q" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.701346 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c4d662c-4ce4-4a74-abb0-e751d736d531-operator-scripts\") pod \"glance-db-create-xnjln\" (UID: \"5c4d662c-4ce4-4a74-abb0-e751d736d531\") " pod="openstack/glance-db-create-xnjln" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.735005 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czhxf\" (UniqueName: \"kubernetes.io/projected/5c4d662c-4ce4-4a74-abb0-e751d736d531-kube-api-access-czhxf\") pod \"glance-db-create-xnjln\" (UID: \"5c4d662c-4ce4-4a74-abb0-e751d736d531\") " pod="openstack/glance-db-create-xnjln" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.740559 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hks9\" (UniqueName: \"kubernetes.io/projected/764c11b4-f0d4-46e4-9742-570e42729ab8-kube-api-access-6hks9\") pod \"glance-99a8-account-create-update-mb52q\" (UID: \"764c11b4-f0d4-46e4-9742-570e42729ab8\") " pod="openstack/glance-99a8-account-create-update-mb52q" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.830828 4787 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-db-create-xnjln" Jan 26 18:02:12 crc kubenswrapper[4787]: I0126 18:02:12.844350 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-99a8-account-create-update-mb52q" Jan 26 18:02:13 crc kubenswrapper[4787]: W0126 18:02:13.301927 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod764c11b4_f0d4_46e4_9742_570e42729ab8.slice/crio-c1990d91352f84b6121a725bb365ee2bfce92b9221ca0042fbe90a6aa3458642 WatchSource:0}: Error finding container c1990d91352f84b6121a725bb365ee2bfce92b9221ca0042fbe90a6aa3458642: Status 404 returned error can't find the container with id c1990d91352f84b6121a725bb365ee2bfce92b9221ca0042fbe90a6aa3458642 Jan 26 18:02:13 crc kubenswrapper[4787]: I0126 18:02:13.302402 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-99a8-account-create-update-mb52q"] Jan 26 18:02:13 crc kubenswrapper[4787]: I0126 18:02:13.356266 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xnjln"] Jan 26 18:02:13 crc kubenswrapper[4787]: W0126 18:02:13.365032 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c4d662c_4ce4_4a74_abb0_e751d736d531.slice/crio-ee13acfacfc5c50108c5ef38fc55d6af0e604bd3b55da4ff2ee973bcef90d227 WatchSource:0}: Error finding container ee13acfacfc5c50108c5ef38fc55d6af0e604bd3b55da4ff2ee973bcef90d227: Status 404 returned error can't find the container with id ee13acfacfc5c50108c5ef38fc55d6af0e604bd3b55da4ff2ee973bcef90d227 Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.111973 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vq2mj"] Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.113294 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vq2mj" Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.115700 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.127434 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vq2mj"] Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.225178 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d9a8b01-53d5-438a-9f13-3b95a44a665c-operator-scripts\") pod \"root-account-create-update-vq2mj\" (UID: \"6d9a8b01-53d5-438a-9f13-3b95a44a665c\") " pod="openstack/root-account-create-update-vq2mj" Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.225241 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjgtn\" (UniqueName: \"kubernetes.io/projected/6d9a8b01-53d5-438a-9f13-3b95a44a665c-kube-api-access-kjgtn\") pod \"root-account-create-update-vq2mj\" (UID: \"6d9a8b01-53d5-438a-9f13-3b95a44a665c\") " pod="openstack/root-account-create-update-vq2mj" Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.249537 4787 generic.go:334] "Generic (PLEG): container finished" podID="5c4d662c-4ce4-4a74-abb0-e751d736d531" containerID="2cfdc087326a802bf3ea782b505c896a6531eb26d4b33f6b1a3a2cd365d9e4ff" exitCode=0 Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.249610 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xnjln" event={"ID":"5c4d662c-4ce4-4a74-abb0-e751d736d531","Type":"ContainerDied","Data":"2cfdc087326a802bf3ea782b505c896a6531eb26d4b33f6b1a3a2cd365d9e4ff"} Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.249642 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xnjln" 
event={"ID":"5c4d662c-4ce4-4a74-abb0-e751d736d531","Type":"ContainerStarted","Data":"ee13acfacfc5c50108c5ef38fc55d6af0e604bd3b55da4ff2ee973bcef90d227"} Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.251936 4787 generic.go:334] "Generic (PLEG): container finished" podID="764c11b4-f0d4-46e4-9742-570e42729ab8" containerID="9e3d9a7712dfab6b98c10d41a68524d0c0d57025f372b62b88317d0bc581d650" exitCode=0 Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.251975 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-99a8-account-create-update-mb52q" event={"ID":"764c11b4-f0d4-46e4-9742-570e42729ab8","Type":"ContainerDied","Data":"9e3d9a7712dfab6b98c10d41a68524d0c0d57025f372b62b88317d0bc581d650"} Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.252040 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-99a8-account-create-update-mb52q" event={"ID":"764c11b4-f0d4-46e4-9742-570e42729ab8","Type":"ContainerStarted","Data":"c1990d91352f84b6121a725bb365ee2bfce92b9221ca0042fbe90a6aa3458642"} Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.326594 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjgtn\" (UniqueName: \"kubernetes.io/projected/6d9a8b01-53d5-438a-9f13-3b95a44a665c-kube-api-access-kjgtn\") pod \"root-account-create-update-vq2mj\" (UID: \"6d9a8b01-53d5-438a-9f13-3b95a44a665c\") " pod="openstack/root-account-create-update-vq2mj" Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.326777 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d9a8b01-53d5-438a-9f13-3b95a44a665c-operator-scripts\") pod \"root-account-create-update-vq2mj\" (UID: \"6d9a8b01-53d5-438a-9f13-3b95a44a665c\") " pod="openstack/root-account-create-update-vq2mj" Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.327634 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d9a8b01-53d5-438a-9f13-3b95a44a665c-operator-scripts\") pod \"root-account-create-update-vq2mj\" (UID: \"6d9a8b01-53d5-438a-9f13-3b95a44a665c\") " pod="openstack/root-account-create-update-vq2mj" Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.375440 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjgtn\" (UniqueName: \"kubernetes.io/projected/6d9a8b01-53d5-438a-9f13-3b95a44a665c-kube-api-access-kjgtn\") pod \"root-account-create-update-vq2mj\" (UID: \"6d9a8b01-53d5-438a-9f13-3b95a44a665c\") " pod="openstack/root-account-create-update-vq2mj" Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.439272 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vq2mj" Jan 26 18:02:14 crc kubenswrapper[4787]: I0126 18:02:14.709441 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vq2mj"] Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.262126 4787 generic.go:334] "Generic (PLEG): container finished" podID="6d9a8b01-53d5-438a-9f13-3b95a44a665c" containerID="0e03dd74b5f5333431fff1168265eb890ce6b5d97fb05c99c3ba84322732584b" exitCode=0 Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.262181 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vq2mj" event={"ID":"6d9a8b01-53d5-438a-9f13-3b95a44a665c","Type":"ContainerDied","Data":"0e03dd74b5f5333431fff1168265eb890ce6b5d97fb05c99c3ba84322732584b"} Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.262487 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vq2mj" event={"ID":"6d9a8b01-53d5-438a-9f13-3b95a44a665c","Type":"ContainerStarted","Data":"cda2c351dc7ed62b4fdf9c5f6d9ed1823bd50cf9b240eccd4ee14793a6265be4"} Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.596350 4787 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xnjln" Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.754869 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czhxf\" (UniqueName: \"kubernetes.io/projected/5c4d662c-4ce4-4a74-abb0-e751d736d531-kube-api-access-czhxf\") pod \"5c4d662c-4ce4-4a74-abb0-e751d736d531\" (UID: \"5c4d662c-4ce4-4a74-abb0-e751d736d531\") " Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.755055 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c4d662c-4ce4-4a74-abb0-e751d736d531-operator-scripts\") pod \"5c4d662c-4ce4-4a74-abb0-e751d736d531\" (UID: \"5c4d662c-4ce4-4a74-abb0-e751d736d531\") " Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.756155 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c4d662c-4ce4-4a74-abb0-e751d736d531-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c4d662c-4ce4-4a74-abb0-e751d736d531" (UID: "5c4d662c-4ce4-4a74-abb0-e751d736d531"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.760973 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4d662c-4ce4-4a74-abb0-e751d736d531-kube-api-access-czhxf" (OuterVolumeSpecName: "kube-api-access-czhxf") pod "5c4d662c-4ce4-4a74-abb0-e751d736d531" (UID: "5c4d662c-4ce4-4a74-abb0-e751d736d531"). InnerVolumeSpecName "kube-api-access-czhxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.802832 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-99a8-account-create-update-mb52q" Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.857080 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czhxf\" (UniqueName: \"kubernetes.io/projected/5c4d662c-4ce4-4a74-abb0-e751d736d531-kube-api-access-czhxf\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.857113 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c4d662c-4ce4-4a74-abb0-e751d736d531-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.867505 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.958703 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/764c11b4-f0d4-46e4-9742-570e42729ab8-operator-scripts\") pod \"764c11b4-f0d4-46e4-9742-570e42729ab8\" (UID: \"764c11b4-f0d4-46e4-9742-570e42729ab8\") " Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.958791 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hks9\" (UniqueName: \"kubernetes.io/projected/764c11b4-f0d4-46e4-9742-570e42729ab8-kube-api-access-6hks9\") pod \"764c11b4-f0d4-46e4-9742-570e42729ab8\" (UID: \"764c11b4-f0d4-46e4-9742-570e42729ab8\") " Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.960140 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/764c11b4-f0d4-46e4-9742-570e42729ab8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "764c11b4-f0d4-46e4-9742-570e42729ab8" (UID: "764c11b4-f0d4-46e4-9742-570e42729ab8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:15 crc kubenswrapper[4787]: I0126 18:02:15.963177 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/764c11b4-f0d4-46e4-9742-570e42729ab8-kube-api-access-6hks9" (OuterVolumeSpecName: "kube-api-access-6hks9") pod "764c11b4-f0d4-46e4-9742-570e42729ab8" (UID: "764c11b4-f0d4-46e4-9742-570e42729ab8"). InnerVolumeSpecName "kube-api-access-6hks9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.061422 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hks9\" (UniqueName: \"kubernetes.io/projected/764c11b4-f0d4-46e4-9742-570e42729ab8-kube-api-access-6hks9\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.061482 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/764c11b4-f0d4-46e4-9742-570e42729ab8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.164049 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:16 crc kubenswrapper[4787]: E0126 18:02:16.164210 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:02:16 crc kubenswrapper[4787]: E0126 18:02:16.164247 4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 18:02:16 crc kubenswrapper[4787]: E0126 18:02:16.164308 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift podName:fba57dd7-3de5-4e62-817d-4fc2c295ddee nodeName:}" failed. No retries permitted until 2026-01-26 18:02:32.164290624 +0000 UTC m=+1120.871426757 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift") pod "swift-storage-0" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee") : configmap "swift-ring-files" not found Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.274429 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-99a8-account-create-update-mb52q" event={"ID":"764c11b4-f0d4-46e4-9742-570e42729ab8","Type":"ContainerDied","Data":"c1990d91352f84b6121a725bb365ee2bfce92b9221ca0042fbe90a6aa3458642"} Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.274473 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1990d91352f84b6121a725bb365ee2bfce92b9221ca0042fbe90a6aa3458642" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.274466 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-99a8-account-create-update-mb52q" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.276500 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xnjln" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.276529 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xnjln" event={"ID":"5c4d662c-4ce4-4a74-abb0-e751d736d531","Type":"ContainerDied","Data":"ee13acfacfc5c50108c5ef38fc55d6af0e604bd3b55da4ff2ee973bcef90d227"} Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.276597 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee13acfacfc5c50108c5ef38fc55d6af0e604bd3b55da4ff2ee973bcef90d227" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.653545 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vq2mj" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.671764 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d9a8b01-53d5-438a-9f13-3b95a44a665c-operator-scripts\") pod \"6d9a8b01-53d5-438a-9f13-3b95a44a665c\" (UID: \"6d9a8b01-53d5-438a-9f13-3b95a44a665c\") " Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.672400 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d9a8b01-53d5-438a-9f13-3b95a44a665c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d9a8b01-53d5-438a-9f13-3b95a44a665c" (UID: "6d9a8b01-53d5-438a-9f13-3b95a44a665c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.773534 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjgtn\" (UniqueName: \"kubernetes.io/projected/6d9a8b01-53d5-438a-9f13-3b95a44a665c-kube-api-access-kjgtn\") pod \"6d9a8b01-53d5-438a-9f13-3b95a44a665c\" (UID: \"6d9a8b01-53d5-438a-9f13-3b95a44a665c\") " Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.774432 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d9a8b01-53d5-438a-9f13-3b95a44a665c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.779437 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d9a8b01-53d5-438a-9f13-3b95a44a665c-kube-api-access-kjgtn" (OuterVolumeSpecName: "kube-api-access-kjgtn") pod "6d9a8b01-53d5-438a-9f13-3b95a44a665c" (UID: "6d9a8b01-53d5-438a-9f13-3b95a44a665c"). InnerVolumeSpecName "kube-api-access-kjgtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.803395 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9664z"] Jan 26 18:02:16 crc kubenswrapper[4787]: E0126 18:02:16.803847 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d9a8b01-53d5-438a-9f13-3b95a44a665c" containerName="mariadb-account-create-update" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.803872 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d9a8b01-53d5-438a-9f13-3b95a44a665c" containerName="mariadb-account-create-update" Jan 26 18:02:16 crc kubenswrapper[4787]: E0126 18:02:16.803889 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4d662c-4ce4-4a74-abb0-e751d736d531" containerName="mariadb-database-create" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.803897 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4d662c-4ce4-4a74-abb0-e751d736d531" containerName="mariadb-database-create" Jan 26 18:02:16 crc kubenswrapper[4787]: E0126 18:02:16.803914 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764c11b4-f0d4-46e4-9742-570e42729ab8" containerName="mariadb-account-create-update" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.803923 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="764c11b4-f0d4-46e4-9742-570e42729ab8" containerName="mariadb-account-create-update" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.804144 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4d662c-4ce4-4a74-abb0-e751d736d531" containerName="mariadb-database-create" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.804171 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="764c11b4-f0d4-46e4-9742-570e42729ab8" containerName="mariadb-account-create-update" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.804188 4787 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6d9a8b01-53d5-438a-9f13-3b95a44a665c" containerName="mariadb-account-create-update" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.804821 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9664z" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.811381 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9664z"] Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.874915 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4577a0-e573-4acc-9d34-0b42da7381f8-operator-scripts\") pod \"keystone-db-create-9664z\" (UID: \"2d4577a0-e573-4acc-9d34-0b42da7381f8\") " pod="openstack/keystone-db-create-9664z" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.875038 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlcs8\" (UniqueName: \"kubernetes.io/projected/2d4577a0-e573-4acc-9d34-0b42da7381f8-kube-api-access-tlcs8\") pod \"keystone-db-create-9664z\" (UID: \"2d4577a0-e573-4acc-9d34-0b42da7381f8\") " pod="openstack/keystone-db-create-9664z" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.875175 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjgtn\" (UniqueName: \"kubernetes.io/projected/6d9a8b01-53d5-438a-9f13-3b95a44a665c-kube-api-access-kjgtn\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.889812 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b1b9-account-create-update-7ghm2"] Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.891074 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b1b9-account-create-update-7ghm2" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.893430 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.902331 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b1b9-account-create-update-7ghm2"] Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.976263 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4577a0-e573-4acc-9d34-0b42da7381f8-operator-scripts\") pod \"keystone-db-create-9664z\" (UID: \"2d4577a0-e573-4acc-9d34-0b42da7381f8\") " pod="openstack/keystone-db-create-9664z" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.976574 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlcs8\" (UniqueName: \"kubernetes.io/projected/2d4577a0-e573-4acc-9d34-0b42da7381f8-kube-api-access-tlcs8\") pod \"keystone-db-create-9664z\" (UID: \"2d4577a0-e573-4acc-9d34-0b42da7381f8\") " pod="openstack/keystone-db-create-9664z" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.977023 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4577a0-e573-4acc-9d34-0b42da7381f8-operator-scripts\") pod \"keystone-db-create-9664z\" (UID: \"2d4577a0-e573-4acc-9d34-0b42da7381f8\") " pod="openstack/keystone-db-create-9664z" Jan 26 18:02:16 crc kubenswrapper[4787]: I0126 18:02:16.992780 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlcs8\" (UniqueName: \"kubernetes.io/projected/2d4577a0-e573-4acc-9d34-0b42da7381f8-kube-api-access-tlcs8\") pod \"keystone-db-create-9664z\" (UID: \"2d4577a0-e573-4acc-9d34-0b42da7381f8\") " pod="openstack/keystone-db-create-9664z" Jan 26 18:02:17 crc 
kubenswrapper[4787]: I0126 18:02:17.078292 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cc058b4-5013-4418-ba61-1d9f98b624af-operator-scripts\") pod \"keystone-b1b9-account-create-update-7ghm2\" (UID: \"1cc058b4-5013-4418-ba61-1d9f98b624af\") " pod="openstack/keystone-b1b9-account-create-update-7ghm2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.078353 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jxs5\" (UniqueName: \"kubernetes.io/projected/1cc058b4-5013-4418-ba61-1d9f98b624af-kube-api-access-4jxs5\") pod \"keystone-b1b9-account-create-update-7ghm2\" (UID: \"1cc058b4-5013-4418-ba61-1d9f98b624af\") " pod="openstack/keystone-b1b9-account-create-update-7ghm2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.106534 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-44bz2"] Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.107723 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-44bz2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.114893 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-44bz2"] Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.138025 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9664z" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.182480 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cc058b4-5013-4418-ba61-1d9f98b624af-operator-scripts\") pod \"keystone-b1b9-account-create-update-7ghm2\" (UID: \"1cc058b4-5013-4418-ba61-1d9f98b624af\") " pod="openstack/keystone-b1b9-account-create-update-7ghm2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.183010 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jxs5\" (UniqueName: \"kubernetes.io/projected/1cc058b4-5013-4418-ba61-1d9f98b624af-kube-api-access-4jxs5\") pod \"keystone-b1b9-account-create-update-7ghm2\" (UID: \"1cc058b4-5013-4418-ba61-1d9f98b624af\") " pod="openstack/keystone-b1b9-account-create-update-7ghm2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.183338 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cc058b4-5013-4418-ba61-1d9f98b624af-operator-scripts\") pod \"keystone-b1b9-account-create-update-7ghm2\" (UID: \"1cc058b4-5013-4418-ba61-1d9f98b624af\") " pod="openstack/keystone-b1b9-account-create-update-7ghm2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.191754 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2c05-account-create-update-bmkzn"] Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.192694 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2c05-account-create-update-bmkzn" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.197285 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.202355 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2c05-account-create-update-bmkzn"] Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.219474 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jxs5\" (UniqueName: \"kubernetes.io/projected/1cc058b4-5013-4418-ba61-1d9f98b624af-kube-api-access-4jxs5\") pod \"keystone-b1b9-account-create-update-7ghm2\" (UID: \"1cc058b4-5013-4418-ba61-1d9f98b624af\") " pod="openstack/keystone-b1b9-account-create-update-7ghm2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.284240 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1973e32a-6474-4322-b96b-7ac80f7017cc-operator-scripts\") pod \"placement-db-create-44bz2\" (UID: \"1973e32a-6474-4322-b96b-7ac80f7017cc\") " pod="openstack/placement-db-create-44bz2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.284559 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7w8q\" (UniqueName: \"kubernetes.io/projected/1973e32a-6474-4322-b96b-7ac80f7017cc-kube-api-access-w7w8q\") pod \"placement-db-create-44bz2\" (UID: \"1973e32a-6474-4322-b96b-7ac80f7017cc\") " pod="openstack/placement-db-create-44bz2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.289760 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vq2mj" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.290350 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vq2mj" event={"ID":"6d9a8b01-53d5-438a-9f13-3b95a44a665c","Type":"ContainerDied","Data":"cda2c351dc7ed62b4fdf9c5f6d9ed1823bd50cf9b240eccd4ee14793a6265be4"} Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.290415 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cda2c351dc7ed62b4fdf9c5f6d9ed1823bd50cf9b240eccd4ee14793a6265be4" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.293531 4787 generic.go:334] "Generic (PLEG): container finished" podID="6dfd1be0-7c5e-427e-8847-28e938c19844" containerID="681caf359914c2e80c323e59ae7950bc367a24c1561406e9f672a0e4a771fefd" exitCode=0 Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.293572 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zz9w2" event={"ID":"6dfd1be0-7c5e-427e-8847-28e938c19844","Type":"ContainerDied","Data":"681caf359914c2e80c323e59ae7950bc367a24c1561406e9f672a0e4a771fefd"} Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.369671 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9664z"] Jan 26 18:02:17 crc kubenswrapper[4787]: W0126 18:02:17.369682 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d4577a0_e573_4acc_9d34_0b42da7381f8.slice/crio-e4d651e09af7ed0042c9f4d14dbcf9e9b38eb80932c15d3a3bb87003e44a0356 WatchSource:0}: Error finding container e4d651e09af7ed0042c9f4d14dbcf9e9b38eb80932c15d3a3bb87003e44a0356: Status 404 returned error can't find the container with id e4d651e09af7ed0042c9f4d14dbcf9e9b38eb80932c15d3a3bb87003e44a0356 Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.386990 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-w7w8q\" (UniqueName: \"kubernetes.io/projected/1973e32a-6474-4322-b96b-7ac80f7017cc-kube-api-access-w7w8q\") pod \"placement-db-create-44bz2\" (UID: \"1973e32a-6474-4322-b96b-7ac80f7017cc\") " pod="openstack/placement-db-create-44bz2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.387064 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw2cg\" (UniqueName: \"kubernetes.io/projected/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-kube-api-access-tw2cg\") pod \"placement-2c05-account-create-update-bmkzn\" (UID: \"eb1a1b8c-7e96-4505-91b3-5816d78a62e8\") " pod="openstack/placement-2c05-account-create-update-bmkzn" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.387209 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1973e32a-6474-4322-b96b-7ac80f7017cc-operator-scripts\") pod \"placement-db-create-44bz2\" (UID: \"1973e32a-6474-4322-b96b-7ac80f7017cc\") " pod="openstack/placement-db-create-44bz2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.387503 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-operator-scripts\") pod \"placement-2c05-account-create-update-bmkzn\" (UID: \"eb1a1b8c-7e96-4505-91b3-5816d78a62e8\") " pod="openstack/placement-2c05-account-create-update-bmkzn" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.389558 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1973e32a-6474-4322-b96b-7ac80f7017cc-operator-scripts\") pod \"placement-db-create-44bz2\" (UID: \"1973e32a-6474-4322-b96b-7ac80f7017cc\") " pod="openstack/placement-db-create-44bz2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.403856 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7w8q\" (UniqueName: \"kubernetes.io/projected/1973e32a-6474-4322-b96b-7ac80f7017cc-kube-api-access-w7w8q\") pod \"placement-db-create-44bz2\" (UID: \"1973e32a-6474-4322-b96b-7ac80f7017cc\") " pod="openstack/placement-db-create-44bz2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.430387 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-44bz2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.488994 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-operator-scripts\") pod \"placement-2c05-account-create-update-bmkzn\" (UID: \"eb1a1b8c-7e96-4505-91b3-5816d78a62e8\") " pod="openstack/placement-2c05-account-create-update-bmkzn" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.489077 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw2cg\" (UniqueName: \"kubernetes.io/projected/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-kube-api-access-tw2cg\") pod \"placement-2c05-account-create-update-bmkzn\" (UID: \"eb1a1b8c-7e96-4505-91b3-5816d78a62e8\") " pod="openstack/placement-2c05-account-create-update-bmkzn" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.489869 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-operator-scripts\") pod \"placement-2c05-account-create-update-bmkzn\" (UID: \"eb1a1b8c-7e96-4505-91b3-5816d78a62e8\") " pod="openstack/placement-2c05-account-create-update-bmkzn" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.511088 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b1b9-account-create-update-7ghm2" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.513096 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw2cg\" (UniqueName: \"kubernetes.io/projected/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-kube-api-access-tw2cg\") pod \"placement-2c05-account-create-update-bmkzn\" (UID: \"eb1a1b8c-7e96-4505-91b3-5816d78a62e8\") " pod="openstack/placement-2c05-account-create-update-bmkzn" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.597019 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2c05-account-create-update-bmkzn" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.621536 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5782v"] Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.622476 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5782v" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.625896 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.625989 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xrhkx" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.630501 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5782v"] Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.792971 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg5kr\" (UniqueName: \"kubernetes.io/projected/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-kube-api-access-sg5kr\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v" Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.793042 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-config-data\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.793064 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-combined-ca-bundle\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.793110 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-db-sync-config-data\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.895658 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-db-sync-config-data\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.895778 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg5kr\" (UniqueName: \"kubernetes.io/projected/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-kube-api-access-sg5kr\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.895812 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-config-data\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.895833 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-combined-ca-bundle\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.904910 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-config-data\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.905103 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-db-sync-config-data\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.906118 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-44bz2"]
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.908811 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-combined-ca-bundle\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.914568 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg5kr\" (UniqueName: \"kubernetes.io/projected/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-kube-api-access-sg5kr\") pod \"glance-db-sync-5782v\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:17 crc kubenswrapper[4787]: I0126 18:02:17.939627 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5782v"
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.066287 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b1b9-account-create-update-7ghm2"]
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.148591 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2c05-account-create-update-bmkzn"]
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.306687 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2c05-account-create-update-bmkzn" event={"ID":"eb1a1b8c-7e96-4505-91b3-5816d78a62e8","Type":"ContainerStarted","Data":"8e631e16b23d470451758c24f9584de1bbfde309b8e9852c4c8bf5a88aa12a56"}
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.308524 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-44bz2" event={"ID":"1973e32a-6474-4322-b96b-7ac80f7017cc","Type":"ContainerStarted","Data":"4d31af1d71496499951766cd6b7f2ff55fc08f3ca63a1bb9624f626028d40a74"}
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.311065 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9664z" event={"ID":"2d4577a0-e573-4acc-9d34-0b42da7381f8","Type":"ContainerStarted","Data":"2a5be95646268c8afef7ee687a6bc6a9a2b009cf4c92fc8984f1d889785779dd"}
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.311187 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9664z" event={"ID":"2d4577a0-e573-4acc-9d34-0b42da7381f8","Type":"ContainerStarted","Data":"e4d651e09af7ed0042c9f4d14dbcf9e9b38eb80932c15d3a3bb87003e44a0356"}
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.312465 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b1b9-account-create-update-7ghm2" event={"ID":"1cc058b4-5013-4418-ba61-1d9f98b624af","Type":"ContainerStarted","Data":"ef850bf8423b8a4639be58078826fd136d882d3601f8595ff912bd09945f2a22"}
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.329885 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-9664z" podStartSLOduration=2.329865013 podStartE2EDuration="2.329865013s" podCreationTimestamp="2026-01-26 18:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:02:18.326230005 +0000 UTC m=+1107.033366158" watchObservedRunningTime="2026-01-26 18:02:18.329865013 +0000 UTC m=+1107.037001146"
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.468674 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5782v"]
Jan 26 18:02:18 crc kubenswrapper[4787]: W0126 18:02:18.477637 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1eed840_68dc_40c2_b2d7_3d3b350b9a12.slice/crio-035be67d7222a47430da1132b4cb5a816786fc2ca32547c8f91813a55831f3f0 WatchSource:0}: Error finding container 035be67d7222a47430da1132b4cb5a816786fc2ca32547c8f91813a55831f3f0: Status 404 returned error can't find the container with id 035be67d7222a47430da1132b4cb5a816786fc2ca32547c8f91813a55831f3f0
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.571112 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zz9w2"
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.708325 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6dfd1be0-7c5e-427e-8847-28e938c19844-etc-swift\") pod \"6dfd1be0-7c5e-427e-8847-28e938c19844\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") "
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.708707 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-combined-ca-bundle\") pod \"6dfd1be0-7c5e-427e-8847-28e938c19844\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") "
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.708769 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-ring-data-devices\") pod \"6dfd1be0-7c5e-427e-8847-28e938c19844\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") "
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.708797 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv8fm\" (UniqueName: \"kubernetes.io/projected/6dfd1be0-7c5e-427e-8847-28e938c19844-kube-api-access-hv8fm\") pod \"6dfd1be0-7c5e-427e-8847-28e938c19844\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") "
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.708862 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-swiftconf\") pod \"6dfd1be0-7c5e-427e-8847-28e938c19844\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") "
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.708934 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-dispersionconf\") pod \"6dfd1be0-7c5e-427e-8847-28e938c19844\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") "
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.709009 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-scripts\") pod \"6dfd1be0-7c5e-427e-8847-28e938c19844\" (UID: \"6dfd1be0-7c5e-427e-8847-28e938c19844\") "
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.709687 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6dfd1be0-7c5e-427e-8847-28e938c19844" (UID: "6dfd1be0-7c5e-427e-8847-28e938c19844"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.713065 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dfd1be0-7c5e-427e-8847-28e938c19844-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6dfd1be0-7c5e-427e-8847-28e938c19844" (UID: "6dfd1be0-7c5e-427e-8847-28e938c19844"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.719241 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dfd1be0-7c5e-427e-8847-28e938c19844-kube-api-access-hv8fm" (OuterVolumeSpecName: "kube-api-access-hv8fm") pod "6dfd1be0-7c5e-427e-8847-28e938c19844" (UID: "6dfd1be0-7c5e-427e-8847-28e938c19844"). InnerVolumeSpecName "kube-api-access-hv8fm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.719901 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6dfd1be0-7c5e-427e-8847-28e938c19844" (UID: "6dfd1be0-7c5e-427e-8847-28e938c19844"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.732517 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-scripts" (OuterVolumeSpecName: "scripts") pod "6dfd1be0-7c5e-427e-8847-28e938c19844" (UID: "6dfd1be0-7c5e-427e-8847-28e938c19844"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.732910 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6dfd1be0-7c5e-427e-8847-28e938c19844" (UID: "6dfd1be0-7c5e-427e-8847-28e938c19844"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.733258 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dfd1be0-7c5e-427e-8847-28e938c19844" (UID: "6dfd1be0-7c5e-427e-8847-28e938c19844"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.810939 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv8fm\" (UniqueName: \"kubernetes.io/projected/6dfd1be0-7c5e-427e-8847-28e938c19844-kube-api-access-hv8fm\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.810991 4787 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.811003 4787 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.811014 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.811024 4787 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6dfd1be0-7c5e-427e-8847-28e938c19844-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.811032 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfd1be0-7c5e-427e-8847-28e938c19844-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:18 crc kubenswrapper[4787]: I0126 18:02:18.811042 4787 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6dfd1be0-7c5e-427e-8847-28e938c19844-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:19 crc kubenswrapper[4787]: I0126 18:02:19.320519 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5782v" event={"ID":"f1eed840-68dc-40c2-b2d7-3d3b350b9a12","Type":"ContainerStarted","Data":"035be67d7222a47430da1132b4cb5a816786fc2ca32547c8f91813a55831f3f0"}
Jan 26 18:02:19 crc kubenswrapper[4787]: I0126 18:02:19.322071 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zz9w2" event={"ID":"6dfd1be0-7c5e-427e-8847-28e938c19844","Type":"ContainerDied","Data":"a098d75cb8c2ec05f9eae896d8463463d10fe4896299e9fa69fe6908c7b0bb2d"}
Jan 26 18:02:19 crc kubenswrapper[4787]: I0126 18:02:19.322095 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a098d75cb8c2ec05f9eae896d8463463d10fe4896299e9fa69fe6908c7b0bb2d"
Jan 26 18:02:19 crc kubenswrapper[4787]: I0126 18:02:19.322119 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zz9w2"
Jan 26 18:02:19 crc kubenswrapper[4787]: I0126 18:02:19.323540 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-44bz2" event={"ID":"1973e32a-6474-4322-b96b-7ac80f7017cc","Type":"ContainerStarted","Data":"babd92495b61f372c94e077b7addc8748311b610d29565a0044d236beb024c22"}
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.332540 4787 generic.go:334] "Generic (PLEG): container finished" podID="eb1a1b8c-7e96-4505-91b3-5816d78a62e8" containerID="d1aa06f7ee36fc291a73b3de2f60e451ed508ff2df7347718492fffa571221ef" exitCode=0
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.332670 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2c05-account-create-update-bmkzn" event={"ID":"eb1a1b8c-7e96-4505-91b3-5816d78a62e8","Type":"ContainerDied","Data":"d1aa06f7ee36fc291a73b3de2f60e451ed508ff2df7347718492fffa571221ef"}
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.337008 4787 generic.go:334] "Generic (PLEG): container finished" podID="1973e32a-6474-4322-b96b-7ac80f7017cc" containerID="babd92495b61f372c94e077b7addc8748311b610d29565a0044d236beb024c22" exitCode=0
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.337111 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-44bz2" event={"ID":"1973e32a-6474-4322-b96b-7ac80f7017cc","Type":"ContainerDied","Data":"babd92495b61f372c94e077b7addc8748311b610d29565a0044d236beb024c22"}
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.338877 4787 generic.go:334] "Generic (PLEG): container finished" podID="57e65f25-43dd-4baf-b2fa-7256dcbd452d" containerID="2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c" exitCode=0
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.338940 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"57e65f25-43dd-4baf-b2fa-7256dcbd452d","Type":"ContainerDied","Data":"2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c"}
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.350296 4787 generic.go:334] "Generic (PLEG): container finished" podID="2d4577a0-e573-4acc-9d34-0b42da7381f8" containerID="2a5be95646268c8afef7ee687a6bc6a9a2b009cf4c92fc8984f1d889785779dd" exitCode=0
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.350410 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9664z" event={"ID":"2d4577a0-e573-4acc-9d34-0b42da7381f8","Type":"ContainerDied","Data":"2a5be95646268c8afef7ee687a6bc6a9a2b009cf4c92fc8984f1d889785779dd"}
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.352770 4787 generic.go:334] "Generic (PLEG): container finished" podID="1cc058b4-5013-4418-ba61-1d9f98b624af" containerID="a23ae7ff6a07b75da6a80c10f0e64e10bcacdee7dd03abd01b57adb39db6c328" exitCode=0
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.352822 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b1b9-account-create-update-7ghm2" event={"ID":"1cc058b4-5013-4418-ba61-1d9f98b624af","Type":"ContainerDied","Data":"a23ae7ff6a07b75da6a80c10f0e64e10bcacdee7dd03abd01b57adb39db6c328"}
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.542387 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vq2mj"]
Jan 26 18:02:20 crc kubenswrapper[4787]: I0126 18:02:20.549567 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vq2mj"]
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.368906 4787 generic.go:334] "Generic (PLEG): container finished" podID="bb1abb80-0591-49c7-b549-969066392a5a" containerID="48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d" exitCode=0
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.369009 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb1abb80-0591-49c7-b549-969066392a5a","Type":"ContainerDied","Data":"48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d"}
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.376179 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"57e65f25-43dd-4baf-b2fa-7256dcbd452d","Type":"ContainerStarted","Data":"53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad"}
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.376414 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.609815 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d9a8b01-53d5-438a-9f13-3b95a44a665c" path="/var/lib/kubelet/pods/6d9a8b01-53d5-438a-9f13-3b95a44a665c/volumes"
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.635777 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.81869291 podStartE2EDuration="59.635721005s" podCreationTimestamp="2026-01-26 18:01:22 +0000 UTC" firstStartedPulling="2026-01-26 18:01:36.713392232 +0000 UTC m=+1065.420528355" lastFinishedPulling="2026-01-26 18:01:45.530420317 +0000 UTC m=+1074.237556450" observedRunningTime="2026-01-26 18:02:21.42457601 +0000 UTC m=+1110.131712153" watchObservedRunningTime="2026-01-26 18:02:21.635721005 +0000 UTC m=+1110.342857138"
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.759631 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2c05-account-create-update-bmkzn"
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.869920 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw2cg\" (UniqueName: \"kubernetes.io/projected/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-kube-api-access-tw2cg\") pod \"eb1a1b8c-7e96-4505-91b3-5816d78a62e8\" (UID: \"eb1a1b8c-7e96-4505-91b3-5816d78a62e8\") "
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.870109 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-operator-scripts\") pod \"eb1a1b8c-7e96-4505-91b3-5816d78a62e8\" (UID: \"eb1a1b8c-7e96-4505-91b3-5816d78a62e8\") "
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.870496 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb1a1b8c-7e96-4505-91b3-5816d78a62e8" (UID: "eb1a1b8c-7e96-4505-91b3-5816d78a62e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.876877 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-kube-api-access-tw2cg" (OuterVolumeSpecName: "kube-api-access-tw2cg") pod "eb1a1b8c-7e96-4505-91b3-5816d78a62e8" (UID: "eb1a1b8c-7e96-4505-91b3-5816d78a62e8"). InnerVolumeSpecName "kube-api-access-tw2cg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.880388 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9664z"
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.885597 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b1b9-account-create-update-7ghm2"
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.921678 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-44bz2"
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.971004 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jxs5\" (UniqueName: \"kubernetes.io/projected/1cc058b4-5013-4418-ba61-1d9f98b624af-kube-api-access-4jxs5\") pod \"1cc058b4-5013-4418-ba61-1d9f98b624af\" (UID: \"1cc058b4-5013-4418-ba61-1d9f98b624af\") "
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.971114 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4577a0-e573-4acc-9d34-0b42da7381f8-operator-scripts\") pod \"2d4577a0-e573-4acc-9d34-0b42da7381f8\" (UID: \"2d4577a0-e573-4acc-9d34-0b42da7381f8\") "
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.971214 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cc058b4-5013-4418-ba61-1d9f98b624af-operator-scripts\") pod \"1cc058b4-5013-4418-ba61-1d9f98b624af\" (UID: \"1cc058b4-5013-4418-ba61-1d9f98b624af\") "
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.971309 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlcs8\" (UniqueName: \"kubernetes.io/projected/2d4577a0-e573-4acc-9d34-0b42da7381f8-kube-api-access-tlcs8\") pod \"2d4577a0-e573-4acc-9d34-0b42da7381f8\" (UID: \"2d4577a0-e573-4acc-9d34-0b42da7381f8\") "
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.971721 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4577a0-e573-4acc-9d34-0b42da7381f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d4577a0-e573-4acc-9d34-0b42da7381f8" (UID: "2d4577a0-e573-4acc-9d34-0b42da7381f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.971848 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cc058b4-5013-4418-ba61-1d9f98b624af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1cc058b4-5013-4418-ba61-1d9f98b624af" (UID: "1cc058b4-5013-4418-ba61-1d9f98b624af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.972095 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4577a0-e573-4acc-9d34-0b42da7381f8-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.972112 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cc058b4-5013-4418-ba61-1d9f98b624af-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.972122 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw2cg\" (UniqueName: \"kubernetes.io/projected/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-kube-api-access-tw2cg\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.972133 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1a1b8c-7e96-4505-91b3-5816d78a62e8-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.974921 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4577a0-e573-4acc-9d34-0b42da7381f8-kube-api-access-tlcs8" (OuterVolumeSpecName: "kube-api-access-tlcs8") pod "2d4577a0-e573-4acc-9d34-0b42da7381f8" (UID: "2d4577a0-e573-4acc-9d34-0b42da7381f8"). InnerVolumeSpecName "kube-api-access-tlcs8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:02:21 crc kubenswrapper[4787]: I0126 18:02:21.976642 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cc058b4-5013-4418-ba61-1d9f98b624af-kube-api-access-4jxs5" (OuterVolumeSpecName: "kube-api-access-4jxs5") pod "1cc058b4-5013-4418-ba61-1d9f98b624af" (UID: "1cc058b4-5013-4418-ba61-1d9f98b624af"). InnerVolumeSpecName "kube-api-access-4jxs5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.073429 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7w8q\" (UniqueName: \"kubernetes.io/projected/1973e32a-6474-4322-b96b-7ac80f7017cc-kube-api-access-w7w8q\") pod \"1973e32a-6474-4322-b96b-7ac80f7017cc\" (UID: \"1973e32a-6474-4322-b96b-7ac80f7017cc\") "
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.073689 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1973e32a-6474-4322-b96b-7ac80f7017cc-operator-scripts\") pod \"1973e32a-6474-4322-b96b-7ac80f7017cc\" (UID: \"1973e32a-6474-4322-b96b-7ac80f7017cc\") "
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.074211 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jxs5\" (UniqueName: \"kubernetes.io/projected/1cc058b4-5013-4418-ba61-1d9f98b624af-kube-api-access-4jxs5\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.074240 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlcs8\" (UniqueName: \"kubernetes.io/projected/2d4577a0-e573-4acc-9d34-0b42da7381f8-kube-api-access-tlcs8\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.075313 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1973e32a-6474-4322-b96b-7ac80f7017cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1973e32a-6474-4322-b96b-7ac80f7017cc" (UID: "1973e32a-6474-4322-b96b-7ac80f7017cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.076473 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1973e32a-6474-4322-b96b-7ac80f7017cc-kube-api-access-w7w8q" (OuterVolumeSpecName: "kube-api-access-w7w8q") pod "1973e32a-6474-4322-b96b-7ac80f7017cc" (UID: "1973e32a-6474-4322-b96b-7ac80f7017cc"). InnerVolumeSpecName "kube-api-access-w7w8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.176715 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1973e32a-6474-4322-b96b-7ac80f7017cc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.177175 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7w8q\" (UniqueName: \"kubernetes.io/projected/1973e32a-6474-4322-b96b-7ac80f7017cc-kube-api-access-w7w8q\") on node \"crc\" DevicePath \"\""
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.388316 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb1abb80-0591-49c7-b549-969066392a5a","Type":"ContainerStarted","Data":"6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd"}
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.388532 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.389709 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9664z"
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.389746 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9664z" event={"ID":"2d4577a0-e573-4acc-9d34-0b42da7381f8","Type":"ContainerDied","Data":"e4d651e09af7ed0042c9f4d14dbcf9e9b38eb80932c15d3a3bb87003e44a0356"}
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.389784 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4d651e09af7ed0042c9f4d14dbcf9e9b38eb80932c15d3a3bb87003e44a0356"
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.393098 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b1b9-account-create-update-7ghm2" event={"ID":"1cc058b4-5013-4418-ba61-1d9f98b624af","Type":"ContainerDied","Data":"ef850bf8423b8a4639be58078826fd136d882d3601f8595ff912bd09945f2a22"}
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.393161 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef850bf8423b8a4639be58078826fd136d882d3601f8595ff912bd09945f2a22"
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.393231 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b1b9-account-create-update-7ghm2"
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.396568 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2c05-account-create-update-bmkzn" event={"ID":"eb1a1b8c-7e96-4505-91b3-5816d78a62e8","Type":"ContainerDied","Data":"8e631e16b23d470451758c24f9584de1bbfde309b8e9852c4c8bf5a88aa12a56"}
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.396603 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e631e16b23d470451758c24f9584de1bbfde309b8e9852c4c8bf5a88aa12a56"
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.396608 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2c05-account-create-update-bmkzn"
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.403637 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-44bz2" event={"ID":"1973e32a-6474-4322-b96b-7ac80f7017cc","Type":"ContainerDied","Data":"4d31af1d71496499951766cd6b7f2ff55fc08f3ca63a1bb9624f626028d40a74"}
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.403692 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d31af1d71496499951766cd6b7f2ff55fc08f3ca63a1bb9624f626028d40a74"
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.403701 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-44bz2"
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.420507 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.101915802 podStartE2EDuration="1m0.420486145s" podCreationTimestamp="2026-01-26 18:01:22 +0000 UTC" firstStartedPulling="2026-01-26 18:01:36.723633321 +0000 UTC m=+1065.430769454" lastFinishedPulling="2026-01-26 18:01:46.042203664 +0000 UTC m=+1074.749339797" observedRunningTime="2026-01-26 18:02:22.408176406 +0000 UTC m=+1111.115312539" watchObservedRunningTime="2026-01-26 18:02:22.420486145 +0000 UTC m=+1111.127622278"
Jan 26 18:02:22 crc kubenswrapper[4787]: I0126 18:02:22.588350 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5rlw8" podUID="b55e012f-76df-4721-8be3-dba72f37cf33" containerName="ovn-controller" probeResult="failure" output=<
Jan 26 18:02:22 crc kubenswrapper[4787]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 26 18:02:22 crc kubenswrapper[4787]: >
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.550108 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-698h4"]
Jan 26 18:02:25 crc kubenswrapper[4787]: E0126 18:02:25.550767 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1a1b8c-7e96-4505-91b3-5816d78a62e8" containerName="mariadb-account-create-update"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.550779 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1a1b8c-7e96-4505-91b3-5816d78a62e8" containerName="mariadb-account-create-update"
Jan 26 18:02:25 crc kubenswrapper[4787]: E0126 18:02:25.550793 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc058b4-5013-4418-ba61-1d9f98b624af" containerName="mariadb-account-create-update"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.550799 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc058b4-5013-4418-ba61-1d9f98b624af" containerName="mariadb-account-create-update"
Jan 26 18:02:25 crc kubenswrapper[4787]: E0126 18:02:25.550810 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4577a0-e573-4acc-9d34-0b42da7381f8" containerName="mariadb-database-create"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.550816 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4577a0-e573-4acc-9d34-0b42da7381f8" containerName="mariadb-database-create"
Jan 26 18:02:25 crc kubenswrapper[4787]: E0126 18:02:25.550825 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1973e32a-6474-4322-b96b-7ac80f7017cc" containerName="mariadb-database-create"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.550831 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1973e32a-6474-4322-b96b-7ac80f7017cc" containerName="mariadb-database-create"
Jan 26 18:02:25 crc kubenswrapper[4787]: E0126 18:02:25.550849 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dfd1be0-7c5e-427e-8847-28e938c19844" containerName="swift-ring-rebalance"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.550855 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dfd1be0-7c5e-427e-8847-28e938c19844" containerName="swift-ring-rebalance"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.551043 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4577a0-e573-4acc-9d34-0b42da7381f8" containerName="mariadb-database-create"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.551059 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1973e32a-6474-4322-b96b-7ac80f7017cc" containerName="mariadb-database-create"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.551071 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cc058b4-5013-4418-ba61-1d9f98b624af" containerName="mariadb-account-create-update"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.551078 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1a1b8c-7e96-4505-91b3-5816d78a62e8" containerName="mariadb-account-create-update"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.551088 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dfd1be0-7c5e-427e-8847-28e938c19844" containerName="swift-ring-rebalance"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.551533 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-698h4"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.554573 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.564923 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-698h4"]
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.710185 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbfk\" (UniqueName: \"kubernetes.io/projected/64bf3812-5e05-4d90-89d9-456487b0ce0f-kube-api-access-nlbfk\") pod \"root-account-create-update-698h4\" (UID: \"64bf3812-5e05-4d90-89d9-456487b0ce0f\") " pod="openstack/root-account-create-update-698h4"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.710249 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bf3812-5e05-4d90-89d9-456487b0ce0f-operator-scripts\") pod \"root-account-create-update-698h4\" (UID: \"64bf3812-5e05-4d90-89d9-456487b0ce0f\") " pod="openstack/root-account-create-update-698h4"
Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.811667 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"kube-api-access-nlbfk\" (UniqueName: \"kubernetes.io/projected/64bf3812-5e05-4d90-89d9-456487b0ce0f-kube-api-access-nlbfk\") pod \"root-account-create-update-698h4\" (UID: \"64bf3812-5e05-4d90-89d9-456487b0ce0f\") " pod="openstack/root-account-create-update-698h4" Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.811788 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bf3812-5e05-4d90-89d9-456487b0ce0f-operator-scripts\") pod \"root-account-create-update-698h4\" (UID: \"64bf3812-5e05-4d90-89d9-456487b0ce0f\") " pod="openstack/root-account-create-update-698h4" Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.812634 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bf3812-5e05-4d90-89d9-456487b0ce0f-operator-scripts\") pod \"root-account-create-update-698h4\" (UID: \"64bf3812-5e05-4d90-89d9-456487b0ce0f\") " pod="openstack/root-account-create-update-698h4" Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.833002 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlbfk\" (UniqueName: \"kubernetes.io/projected/64bf3812-5e05-4d90-89d9-456487b0ce0f-kube-api-access-nlbfk\") pod \"root-account-create-update-698h4\" (UID: \"64bf3812-5e05-4d90-89d9-456487b0ce0f\") " pod="openstack/root-account-create-update-698h4" Jan 26 18:02:25 crc kubenswrapper[4787]: I0126 18:02:25.878055 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-698h4" Jan 26 18:02:27 crc kubenswrapper[4787]: I0126 18:02:27.585234 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5rlw8" podUID="b55e012f-76df-4721-8be3-dba72f37cf33" containerName="ovn-controller" probeResult="failure" output=< Jan 26 18:02:27 crc kubenswrapper[4787]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 18:02:27 crc kubenswrapper[4787]: > Jan 26 18:02:27 crc kubenswrapper[4787]: I0126 18:02:27.631657 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:02:27 crc kubenswrapper[4787]: I0126 18:02:27.632101 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:02:27 crc kubenswrapper[4787]: I0126 18:02:27.863739 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5rlw8-config-rqsnx"] Jan 26 18:02:27 crc kubenswrapper[4787]: I0126 18:02:27.865112 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:27 crc kubenswrapper[4787]: I0126 18:02:27.869396 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 18:02:27 crc kubenswrapper[4787]: I0126 18:02:27.875238 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5rlw8-config-rqsnx"] Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.052770 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.052858 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-log-ovn\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.053038 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-additional-scripts\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.053286 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-scripts\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: 
\"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.053359 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4m88\" (UniqueName: \"kubernetes.io/projected/2358959e-a741-4e22-8c4e-9169618a6a67-kube-api-access-f4m88\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.053473 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run-ovn\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.155675 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-scripts\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.155758 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4m88\" (UniqueName: \"kubernetes.io/projected/2358959e-a741-4e22-8c4e-9169618a6a67-kube-api-access-f4m88\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.155840 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run-ovn\") pod 
\"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.155918 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.155985 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-log-ovn\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.156082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-additional-scripts\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.156540 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-log-ovn\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.156563 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: 
\"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.156609 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run-ovn\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.157248 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-additional-scripts\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.158283 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-scripts\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.176578 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4m88\" (UniqueName: \"kubernetes.io/projected/2358959e-a741-4e22-8c4e-9169618a6a67-kube-api-access-f4m88\") pod \"ovn-controller-5rlw8-config-rqsnx\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:28 crc kubenswrapper[4787]: I0126 18:02:28.191183 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:32 crc kubenswrapper[4787]: I0126 18:02:32.223503 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:32 crc kubenswrapper[4787]: I0126 18:02:32.235839 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift\") pod \"swift-storage-0\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " pod="openstack/swift-storage-0" Jan 26 18:02:32 crc kubenswrapper[4787]: I0126 18:02:32.352399 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 26 18:02:32 crc kubenswrapper[4787]: I0126 18:02:32.583378 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5rlw8" podUID="b55e012f-76df-4721-8be3-dba72f37cf33" containerName="ovn-controller" probeResult="failure" output=< Jan 26 18:02:32 crc kubenswrapper[4787]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 18:02:32 crc kubenswrapper[4787]: > Jan 26 18:02:33 crc kubenswrapper[4787]: I0126 18:02:33.707221 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:02:34 crc kubenswrapper[4787]: I0126 18:02:34.077103 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 18:02:34 crc kubenswrapper[4787]: E0126 18:02:34.779383 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f" Jan 26 18:02:34 crc kubenswrapper[4787]: E0126 18:02:34.779802 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sg5kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,Wi
ndowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-5782v_openstack(f1eed840-68dc-40c2-b2d7-3d3b350b9a12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:02:34 crc kubenswrapper[4787]: E0126 18:02:34.781179 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-5782v" podUID="f1eed840-68dc-40c2-b2d7-3d3b350b9a12" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.108313 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-698h4"] Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.203874 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5rlw8-config-rqsnx"] Jan 26 18:02:35 crc kubenswrapper[4787]: W0126 18:02:35.216585 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2358959e_a741_4e22_8c4e_9169618a6a67.slice/crio-f486c6acede0c9c5cb4aa76fa678ebf4f3f0173ebc8b02e2147c8073636d5f32 WatchSource:0}: Error finding container f486c6acede0c9c5cb4aa76fa678ebf4f3f0173ebc8b02e2147c8073636d5f32: Status 404 returned error can't find the container with id f486c6acede0c9c5cb4aa76fa678ebf4f3f0173ebc8b02e2147c8073636d5f32 Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.315793 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.542507 4787 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cinder-db-create-j6kz4"] Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.543523 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j6kz4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.558252 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-j6kz4"] Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.613984 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"82ed9b31c8b2eedc4f6be3465b840d841b7815ed34c3ec896042eec0a91956b9"} Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.614650 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5rlw8-config-rqsnx" event={"ID":"2358959e-a741-4e22-8c4e-9169618a6a67","Type":"ContainerStarted","Data":"06773298540a7b1d6d315f901baf808e6df9fef5a0e024b4b99786ac15da836d"} Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.614695 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5rlw8-config-rqsnx" event={"ID":"2358959e-a741-4e22-8c4e-9169618a6a67","Type":"ContainerStarted","Data":"f486c6acede0c9c5cb4aa76fa678ebf4f3f0173ebc8b02e2147c8073636d5f32"} Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.618820 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-698h4" event={"ID":"64bf3812-5e05-4d90-89d9-456487b0ce0f","Type":"ContainerStarted","Data":"08412cf7c17fc076be21e7e9d94ce72409cda76ea7e6fc93e4179d5c528bcd32"} Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.618858 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-698h4" event={"ID":"64bf3812-5e05-4d90-89d9-456487b0ce0f","Type":"ContainerStarted","Data":"f0ac53f62b8c28087ee8c02ce74d3d1d7025e3e247c34db87bbb6a3c0a42ad7b"} Jan 26 18:02:35 crc 
kubenswrapper[4787]: E0126 18:02:35.619550 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f\\\"\"" pod="openstack/glance-db-sync-5782v" podUID="f1eed840-68dc-40c2-b2d7-3d3b350b9a12" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.647804 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xlql4"] Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.648812 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xlql4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.651075 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5rlw8-config-rqsnx" podStartSLOduration=8.651061033 podStartE2EDuration="8.651061033s" podCreationTimestamp="2026-01-26 18:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:02:35.640440656 +0000 UTC m=+1124.347576779" watchObservedRunningTime="2026-01-26 18:02:35.651061033 +0000 UTC m=+1124.358197176" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.682748 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xlql4"] Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.685035 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86edc55e-eba7-4210-998f-92d7d7d5c18c-operator-scripts\") pod \"cinder-db-create-j6kz4\" (UID: \"86edc55e-eba7-4210-998f-92d7d7d5c18c\") " pod="openstack/cinder-db-create-j6kz4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.685174 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdf87\" (UniqueName: \"kubernetes.io/projected/86edc55e-eba7-4210-998f-92d7d7d5c18c-kube-api-access-fdf87\") pod \"cinder-db-create-j6kz4\" (UID: \"86edc55e-eba7-4210-998f-92d7d7d5c18c\") " pod="openstack/cinder-db-create-j6kz4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.691048 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-698h4" podStartSLOduration=10.691026421 podStartE2EDuration="10.691026421s" podCreationTimestamp="2026-01-26 18:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:02:35.674965443 +0000 UTC m=+1124.382101586" watchObservedRunningTime="2026-01-26 18:02:35.691026421 +0000 UTC m=+1124.398162554" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.746336 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-69a3-account-create-update-vbh4d"] Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.747511 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-69a3-account-create-update-vbh4d" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.749921 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.765082 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-69a3-account-create-update-vbh4d"] Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.791159 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdf87\" (UniqueName: \"kubernetes.io/projected/86edc55e-eba7-4210-998f-92d7d7d5c18c-kube-api-access-fdf87\") pod \"cinder-db-create-j6kz4\" (UID: \"86edc55e-eba7-4210-998f-92d7d7d5c18c\") " pod="openstack/cinder-db-create-j6kz4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.791261 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvs5c\" (UniqueName: \"kubernetes.io/projected/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-kube-api-access-dvs5c\") pod \"barbican-db-create-xlql4\" (UID: \"2d82e638-a1aa-4e59-ad3f-34d6f1db8516\") " pod="openstack/barbican-db-create-xlql4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.791500 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86edc55e-eba7-4210-998f-92d7d7d5c18c-operator-scripts\") pod \"cinder-db-create-j6kz4\" (UID: \"86edc55e-eba7-4210-998f-92d7d7d5c18c\") " pod="openstack/cinder-db-create-j6kz4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.791641 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-operator-scripts\") pod \"barbican-db-create-xlql4\" (UID: \"2d82e638-a1aa-4e59-ad3f-34d6f1db8516\") " 
pod="openstack/barbican-db-create-xlql4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.792219 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86edc55e-eba7-4210-998f-92d7d7d5c18c-operator-scripts\") pod \"cinder-db-create-j6kz4\" (UID: \"86edc55e-eba7-4210-998f-92d7d7d5c18c\") " pod="openstack/cinder-db-create-j6kz4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.829583 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdf87\" (UniqueName: \"kubernetes.io/projected/86edc55e-eba7-4210-998f-92d7d7d5c18c-kube-api-access-fdf87\") pod \"cinder-db-create-j6kz4\" (UID: \"86edc55e-eba7-4210-998f-92d7d7d5c18c\") " pod="openstack/cinder-db-create-j6kz4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.868522 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j6kz4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.869451 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b570-account-create-update-62lx7"] Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.872568 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b570-account-create-update-62lx7" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.877623 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.886521 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b570-account-create-update-62lx7"] Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.899235 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-operator-scripts\") pod \"barbican-db-create-xlql4\" (UID: \"2d82e638-a1aa-4e59-ad3f-34d6f1db8516\") " pod="openstack/barbican-db-create-xlql4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.899333 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c068e38-f516-49f7-853a-a69b8f7d822d-operator-scripts\") pod \"barbican-69a3-account-create-update-vbh4d\" (UID: \"0c068e38-f516-49f7-853a-a69b8f7d822d\") " pod="openstack/barbican-69a3-account-create-update-vbh4d" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.899370 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97sgb\" (UniqueName: \"kubernetes.io/projected/0c068e38-f516-49f7-853a-a69b8f7d822d-kube-api-access-97sgb\") pod \"barbican-69a3-account-create-update-vbh4d\" (UID: \"0c068e38-f516-49f7-853a-a69b8f7d822d\") " pod="openstack/barbican-69a3-account-create-update-vbh4d" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.899407 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvs5c\" (UniqueName: \"kubernetes.io/projected/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-kube-api-access-dvs5c\") pod \"barbican-db-create-xlql4\" (UID: 
\"2d82e638-a1aa-4e59-ad3f-34d6f1db8516\") " pod="openstack/barbican-db-create-xlql4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.900511 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-operator-scripts\") pod \"barbican-db-create-xlql4\" (UID: \"2d82e638-a1aa-4e59-ad3f-34d6f1db8516\") " pod="openstack/barbican-db-create-xlql4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.935854 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvs5c\" (UniqueName: \"kubernetes.io/projected/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-kube-api-access-dvs5c\") pod \"barbican-db-create-xlql4\" (UID: \"2d82e638-a1aa-4e59-ad3f-34d6f1db8516\") " pod="openstack/barbican-db-create-xlql4" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.943688 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dnqzw"] Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.944699 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dnqzw" Jan 26 18:02:35 crc kubenswrapper[4787]: I0126 18:02:35.960164 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dnqzw"] Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.002524 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c068e38-f516-49f7-853a-a69b8f7d822d-operator-scripts\") pod \"barbican-69a3-account-create-update-vbh4d\" (UID: \"0c068e38-f516-49f7-853a-a69b8f7d822d\") " pod="openstack/barbican-69a3-account-create-update-vbh4d" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.002605 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-operator-scripts\") pod \"cinder-b570-account-create-update-62lx7\" (UID: \"f88a529e-5e32-4b4d-b2a8-18a1e4824c88\") " pod="openstack/cinder-b570-account-create-update-62lx7" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.002652 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97sgb\" (UniqueName: \"kubernetes.io/projected/0c068e38-f516-49f7-853a-a69b8f7d822d-kube-api-access-97sgb\") pod \"barbican-69a3-account-create-update-vbh4d\" (UID: \"0c068e38-f516-49f7-853a-a69b8f7d822d\") " pod="openstack/barbican-69a3-account-create-update-vbh4d" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.002709 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w95gq\" (UniqueName: \"kubernetes.io/projected/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-kube-api-access-w95gq\") pod \"cinder-b570-account-create-update-62lx7\" (UID: \"f88a529e-5e32-4b4d-b2a8-18a1e4824c88\") " pod="openstack/cinder-b570-account-create-update-62lx7" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 
18:02:36.003837 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c068e38-f516-49f7-853a-a69b8f7d822d-operator-scripts\") pod \"barbican-69a3-account-create-update-vbh4d\" (UID: \"0c068e38-f516-49f7-853a-a69b8f7d822d\") " pod="openstack/barbican-69a3-account-create-update-vbh4d" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.015458 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xlql4" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.027197 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-v8c8m"] Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.028567 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.035487 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.035806 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.035971 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4s9tn" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.036547 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.047360 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97sgb\" (UniqueName: \"kubernetes.io/projected/0c068e38-f516-49f7-853a-a69b8f7d822d-kube-api-access-97sgb\") pod \"barbican-69a3-account-create-update-vbh4d\" (UID: \"0c068e38-f516-49f7-853a-a69b8f7d822d\") " pod="openstack/barbican-69a3-account-create-update-vbh4d" Jan 26 18:02:36 crc kubenswrapper[4787]: 
I0126 18:02:36.049113 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-v8c8m"] Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.062986 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-69a3-account-create-update-vbh4d" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.079606 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-545b-account-create-update-ql5ph"] Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.080736 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-545b-account-create-update-ql5ph" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.087379 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.090447 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-545b-account-create-update-ql5ph"] Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.104567 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-operator-scripts\") pod \"neutron-db-create-dnqzw\" (UID: \"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e\") " pod="openstack/neutron-db-create-dnqzw" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.104666 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-operator-scripts\") pod \"cinder-b570-account-create-update-62lx7\" (UID: \"f88a529e-5e32-4b4d-b2a8-18a1e4824c88\") " pod="openstack/cinder-b570-account-create-update-62lx7" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.104718 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mhmgv\" (UniqueName: \"kubernetes.io/projected/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-kube-api-access-mhmgv\") pod \"neutron-db-create-dnqzw\" (UID: \"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e\") " pod="openstack/neutron-db-create-dnqzw" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.104773 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w95gq\" (UniqueName: \"kubernetes.io/projected/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-kube-api-access-w95gq\") pod \"cinder-b570-account-create-update-62lx7\" (UID: \"f88a529e-5e32-4b4d-b2a8-18a1e4824c88\") " pod="openstack/cinder-b570-account-create-update-62lx7" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.113986 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-operator-scripts\") pod \"cinder-b570-account-create-update-62lx7\" (UID: \"f88a529e-5e32-4b4d-b2a8-18a1e4824c88\") " pod="openstack/cinder-b570-account-create-update-62lx7" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.143104 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w95gq\" (UniqueName: \"kubernetes.io/projected/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-kube-api-access-w95gq\") pod \"cinder-b570-account-create-update-62lx7\" (UID: \"f88a529e-5e32-4b4d-b2a8-18a1e4824c88\") " pod="openstack/cinder-b570-account-create-update-62lx7" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.206087 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-operator-scripts\") pod \"neutron-db-create-dnqzw\" (UID: \"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e\") " pod="openstack/neutron-db-create-dnqzw" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.206144 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcd64\" (UniqueName: \"kubernetes.io/projected/2acdb479-1c97-4508-b835-b04a0b0aa436-kube-api-access-tcd64\") pod \"keystone-db-sync-v8c8m\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.206214 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-combined-ca-bundle\") pod \"keystone-db-sync-v8c8m\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.206254 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhmgv\" (UniqueName: \"kubernetes.io/projected/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-kube-api-access-mhmgv\") pod \"neutron-db-create-dnqzw\" (UID: \"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e\") " pod="openstack/neutron-db-create-dnqzw" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.206283 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8s6m\" (UniqueName: \"kubernetes.io/projected/a38241c2-8c24-4bc3-91bb-83ee519fe085-kube-api-access-d8s6m\") pod \"neutron-545b-account-create-update-ql5ph\" (UID: \"a38241c2-8c24-4bc3-91bb-83ee519fe085\") " pod="openstack/neutron-545b-account-create-update-ql5ph" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.206349 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-config-data\") pod \"keystone-db-sync-v8c8m\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 
18:02:36.206399 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a38241c2-8c24-4bc3-91bb-83ee519fe085-operator-scripts\") pod \"neutron-545b-account-create-update-ql5ph\" (UID: \"a38241c2-8c24-4bc3-91bb-83ee519fe085\") " pod="openstack/neutron-545b-account-create-update-ql5ph" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.211834 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-operator-scripts\") pod \"neutron-db-create-dnqzw\" (UID: \"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e\") " pod="openstack/neutron-db-create-dnqzw" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.228695 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhmgv\" (UniqueName: \"kubernetes.io/projected/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-kube-api-access-mhmgv\") pod \"neutron-db-create-dnqzw\" (UID: \"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e\") " pod="openstack/neutron-db-create-dnqzw" Jan 26 18:02:36 crc kubenswrapper[4787]: E0126 18:02:36.271008 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2358959e_a741_4e22_8c4e_9169618a6a67.slice/crio-conmon-06773298540a7b1d6d315f901baf808e6df9fef5a0e024b4b99786ac15da836d.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.307833 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcd64\" (UniqueName: \"kubernetes.io/projected/2acdb479-1c97-4508-b835-b04a0b0aa436-kube-api-access-tcd64\") pod \"keystone-db-sync-v8c8m\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:36 crc 
kubenswrapper[4787]: I0126 18:02:36.307906 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-combined-ca-bundle\") pod \"keystone-db-sync-v8c8m\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.307963 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8s6m\" (UniqueName: \"kubernetes.io/projected/a38241c2-8c24-4bc3-91bb-83ee519fe085-kube-api-access-d8s6m\") pod \"neutron-545b-account-create-update-ql5ph\" (UID: \"a38241c2-8c24-4bc3-91bb-83ee519fe085\") " pod="openstack/neutron-545b-account-create-update-ql5ph" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.308034 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-config-data\") pod \"keystone-db-sync-v8c8m\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.308157 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a38241c2-8c24-4bc3-91bb-83ee519fe085-operator-scripts\") pod \"neutron-545b-account-create-update-ql5ph\" (UID: \"a38241c2-8c24-4bc3-91bb-83ee519fe085\") " pod="openstack/neutron-545b-account-create-update-ql5ph" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.310918 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a38241c2-8c24-4bc3-91bb-83ee519fe085-operator-scripts\") pod \"neutron-545b-account-create-update-ql5ph\" (UID: \"a38241c2-8c24-4bc3-91bb-83ee519fe085\") " pod="openstack/neutron-545b-account-create-update-ql5ph" Jan 26 18:02:36 
crc kubenswrapper[4787]: I0126 18:02:36.315854 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-config-data\") pod \"keystone-db-sync-v8c8m\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.316640 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-combined-ca-bundle\") pod \"keystone-db-sync-v8c8m\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.325683 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcd64\" (UniqueName: \"kubernetes.io/projected/2acdb479-1c97-4508-b835-b04a0b0aa436-kube-api-access-tcd64\") pod \"keystone-db-sync-v8c8m\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.328619 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8s6m\" (UniqueName: \"kubernetes.io/projected/a38241c2-8c24-4bc3-91bb-83ee519fe085-kube-api-access-d8s6m\") pod \"neutron-545b-account-create-update-ql5ph\" (UID: \"a38241c2-8c24-4bc3-91bb-83ee519fe085\") " pod="openstack/neutron-545b-account-create-update-ql5ph" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.336864 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b570-account-create-update-62lx7" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.414483 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dnqzw" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.444235 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.452406 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-j6kz4"] Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.462776 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-545b-account-create-update-ql5ph" Jan 26 18:02:36 crc kubenswrapper[4787]: W0126 18:02:36.511080 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86edc55e_eba7_4210_998f_92d7d7d5c18c.slice/crio-b90f85b0976a77df1442f686ee221806950d62a881430cf08792e3e8afc7ffc4 WatchSource:0}: Error finding container b90f85b0976a77df1442f686ee221806950d62a881430cf08792e3e8afc7ffc4: Status 404 returned error can't find the container with id b90f85b0976a77df1442f686ee221806950d62a881430cf08792e3e8afc7ffc4 Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.614409 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xlql4"] Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.641327 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-69a3-account-create-update-vbh4d"] Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.654903 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j6kz4" event={"ID":"86edc55e-eba7-4210-998f-92d7d7d5c18c","Type":"ContainerStarted","Data":"b90f85b0976a77df1442f686ee221806950d62a881430cf08792e3e8afc7ffc4"} Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.662625 4787 generic.go:334] "Generic (PLEG): container finished" podID="2358959e-a741-4e22-8c4e-9169618a6a67" 
containerID="06773298540a7b1d6d315f901baf808e6df9fef5a0e024b4b99786ac15da836d" exitCode=0 Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.662718 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5rlw8-config-rqsnx" event={"ID":"2358959e-a741-4e22-8c4e-9169618a6a67","Type":"ContainerDied","Data":"06773298540a7b1d6d315f901baf808e6df9fef5a0e024b4b99786ac15da836d"} Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.667771 4787 generic.go:334] "Generic (PLEG): container finished" podID="64bf3812-5e05-4d90-89d9-456487b0ce0f" containerID="08412cf7c17fc076be21e7e9d94ce72409cda76ea7e6fc93e4179d5c528bcd32" exitCode=0 Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.667822 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-698h4" event={"ID":"64bf3812-5e05-4d90-89d9-456487b0ce0f","Type":"ContainerDied","Data":"08412cf7c17fc076be21e7e9d94ce72409cda76ea7e6fc93e4179d5c528bcd32"} Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.671901 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b570-account-create-update-62lx7"] Jan 26 18:02:36 crc kubenswrapper[4787]: W0126 18:02:36.919533 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d82e638_a1aa_4e59_ad3f_34d6f1db8516.slice/crio-f3b794b176c680c252e2ffbd09cff91891c13e9551ccc729e721985f4f954b73 WatchSource:0}: Error finding container f3b794b176c680c252e2ffbd09cff91891c13e9551ccc729e721985f4f954b73: Status 404 returned error can't find the container with id f3b794b176c680c252e2ffbd09cff91891c13e9551ccc729e721985f4f954b73 Jan 26 18:02:36 crc kubenswrapper[4787]: W0126 18:02:36.922880 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf88a529e_5e32_4b4d_b2a8_18a1e4824c88.slice/crio-25bcd1a8e399d86117a6bb8b754ef44eef2fe2504ecb5d9021532f900d6627ad WatchSource:0}: Error finding container 25bcd1a8e399d86117a6bb8b754ef44eef2fe2504ecb5d9021532f900d6627ad: Status 404 returned error can't find the container with id 25bcd1a8e399d86117a6bb8b754ef44eef2fe2504ecb5d9021532f900d6627ad Jan 26 18:02:36 crc kubenswrapper[4787]: I0126 18:02:36.963074 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dnqzw"] Jan 26 18:02:36 crc kubenswrapper[4787]: W0126 18:02:36.978240 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb57b7b82_aa02_4c9e_a8c4_8e21d44f627e.slice/crio-ef5736614e46220cd5573e46e4c16bfc4be38f9fdc0932b7ab3bfd52ae42c181 WatchSource:0}: Error finding container ef5736614e46220cd5573e46e4c16bfc4be38f9fdc0932b7ab3bfd52ae42c181: Status 404 returned error can't find the container with id ef5736614e46220cd5573e46e4c16bfc4be38f9fdc0932b7ab3bfd52ae42c181 Jan 26 18:02:37 crc kubenswrapper[4787]: W0126 18:02:37.486324 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2acdb479_1c97_4508_b835_b04a0b0aa436.slice/crio-32e68b7ee6e18e6a467ca8e533d427a2c35524028fb05f29a9f5f61f0c0eedfb WatchSource:0}: Error finding container 32e68b7ee6e18e6a467ca8e533d427a2c35524028fb05f29a9f5f61f0c0eedfb: Status 404 returned error can't find the container with id 32e68b7ee6e18e6a467ca8e533d427a2c35524028fb05f29a9f5f61f0c0eedfb Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.487027 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-v8c8m"] Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.498782 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-545b-account-create-update-ql5ph"] Jan 26 
18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.578589 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5rlw8" Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.700778 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c"} Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.700829 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f"} Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.703224 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-545b-account-create-update-ql5ph" event={"ID":"a38241c2-8c24-4bc3-91bb-83ee519fe085","Type":"ContainerStarted","Data":"b73773124311d8993ce80ec63f02ce88a31057af8ca3ae6a0b1a06b0250d0e16"} Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.707384 4787 generic.go:334] "Generic (PLEG): container finished" podID="86edc55e-eba7-4210-998f-92d7d7d5c18c" containerID="d9dcca694d36180d243ee6744b654bb1c2546c3b6d701787ebd89a9e1baab732" exitCode=0 Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.707451 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j6kz4" event={"ID":"86edc55e-eba7-4210-998f-92d7d7d5c18c","Type":"ContainerDied","Data":"d9dcca694d36180d243ee6744b654bb1c2546c3b6d701787ebd89a9e1baab732"} Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.709614 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v8c8m" event={"ID":"2acdb479-1c97-4508-b835-b04a0b0aa436","Type":"ContainerStarted","Data":"32e68b7ee6e18e6a467ca8e533d427a2c35524028fb05f29a9f5f61f0c0eedfb"} Jan 26 18:02:37 crc 
kubenswrapper[4787]: I0126 18:02:37.712056 4787 generic.go:334] "Generic (PLEG): container finished" podID="b57b7b82-aa02-4c9e-a8c4-8e21d44f627e" containerID="4ff750be119f1f06ba313c23cca75fec856e7fe43ccd9c80add76d84992460a9" exitCode=0 Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.712141 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dnqzw" event={"ID":"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e","Type":"ContainerDied","Data":"4ff750be119f1f06ba313c23cca75fec856e7fe43ccd9c80add76d84992460a9"} Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.712171 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dnqzw" event={"ID":"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e","Type":"ContainerStarted","Data":"ef5736614e46220cd5573e46e4c16bfc4be38f9fdc0932b7ab3bfd52ae42c181"} Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.714194 4787 generic.go:334] "Generic (PLEG): container finished" podID="f88a529e-5e32-4b4d-b2a8-18a1e4824c88" containerID="6a7a7e77f74977071587720834a3241c8456c1b3eccbbadeffeef150ecaa3217" exitCode=0 Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.714268 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b570-account-create-update-62lx7" event={"ID":"f88a529e-5e32-4b4d-b2a8-18a1e4824c88","Type":"ContainerDied","Data":"6a7a7e77f74977071587720834a3241c8456c1b3eccbbadeffeef150ecaa3217"} Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.714296 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b570-account-create-update-62lx7" event={"ID":"f88a529e-5e32-4b4d-b2a8-18a1e4824c88","Type":"ContainerStarted","Data":"25bcd1a8e399d86117a6bb8b754ef44eef2fe2504ecb5d9021532f900d6627ad"} Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.716039 4787 generic.go:334] "Generic (PLEG): container finished" podID="2d82e638-a1aa-4e59-ad3f-34d6f1db8516" 
containerID="580fea9853fd3b084dca775b0f8aa270170f8c78ca977524159fe21d4e22535f" exitCode=0 Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.716126 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xlql4" event={"ID":"2d82e638-a1aa-4e59-ad3f-34d6f1db8516","Type":"ContainerDied","Data":"580fea9853fd3b084dca775b0f8aa270170f8c78ca977524159fe21d4e22535f"} Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.716156 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xlql4" event={"ID":"2d82e638-a1aa-4e59-ad3f-34d6f1db8516","Type":"ContainerStarted","Data":"f3b794b176c680c252e2ffbd09cff91891c13e9551ccc729e721985f4f954b73"} Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.727265 4787 generic.go:334] "Generic (PLEG): container finished" podID="0c068e38-f516-49f7-853a-a69b8f7d822d" containerID="b57b6191eff8819bd8e4ae8e4e345a4787572c4758499603a31165aceb4fec8d" exitCode=0 Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.727580 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-69a3-account-create-update-vbh4d" event={"ID":"0c068e38-f516-49f7-853a-a69b8f7d822d","Type":"ContainerDied","Data":"b57b6191eff8819bd8e4ae8e4e345a4787572c4758499603a31165aceb4fec8d"} Jan 26 18:02:37 crc kubenswrapper[4787]: I0126 18:02:37.728161 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-69a3-account-create-update-vbh4d" event={"ID":"0c068e38-f516-49f7-853a-a69b8f7d822d","Type":"ContainerStarted","Data":"bf06142e09599ecde52e64bf752b8d2ba0e4afc52f4bc28ebfe842a457e20b84"} Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.147960 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.230488 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-698h4" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.243037 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-additional-scripts\") pod \"2358959e-a741-4e22-8c4e-9169618a6a67\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.243433 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4m88\" (UniqueName: \"kubernetes.io/projected/2358959e-a741-4e22-8c4e-9169618a6a67-kube-api-access-f4m88\") pod \"2358959e-a741-4e22-8c4e-9169618a6a67\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.243465 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-log-ovn\") pod \"2358959e-a741-4e22-8c4e-9169618a6a67\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.243671 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run-ovn\") pod \"2358959e-a741-4e22-8c4e-9169618a6a67\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.244739 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run\") pod \"2358959e-a741-4e22-8c4e-9169618a6a67\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.244823 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-scripts\") pod \"2358959e-a741-4e22-8c4e-9169618a6a67\" (UID: \"2358959e-a741-4e22-8c4e-9169618a6a67\") " Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.254879 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2358959e-a741-4e22-8c4e-9169618a6a67-kube-api-access-f4m88" (OuterVolumeSpecName: "kube-api-access-f4m88") pod "2358959e-a741-4e22-8c4e-9169618a6a67" (UID: "2358959e-a741-4e22-8c4e-9169618a6a67"). InnerVolumeSpecName "kube-api-access-f4m88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.255582 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2358959e-a741-4e22-8c4e-9169618a6a67" (UID: "2358959e-a741-4e22-8c4e-9169618a6a67"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.255625 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2358959e-a741-4e22-8c4e-9169618a6a67" (UID: "2358959e-a741-4e22-8c4e-9169618a6a67"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.255649 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2358959e-a741-4e22-8c4e-9169618a6a67" (UID: "2358959e-a741-4e22-8c4e-9169618a6a67"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.255671 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run" (OuterVolumeSpecName: "var-run") pod "2358959e-a741-4e22-8c4e-9169618a6a67" (UID: "2358959e-a741-4e22-8c4e-9169618a6a67"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.269087 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-scripts" (OuterVolumeSpecName: "scripts") pod "2358959e-a741-4e22-8c4e-9169618a6a67" (UID: "2358959e-a741-4e22-8c4e-9169618a6a67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.331865 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5rlw8-config-rqsnx"] Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.338198 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5rlw8-config-rqsnx"] Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.355678 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bf3812-5e05-4d90-89d9-456487b0ce0f-operator-scripts\") pod \"64bf3812-5e05-4d90-89d9-456487b0ce0f\" (UID: \"64bf3812-5e05-4d90-89d9-456487b0ce0f\") " Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.355996 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlbfk\" (UniqueName: \"kubernetes.io/projected/64bf3812-5e05-4d90-89d9-456487b0ce0f-kube-api-access-nlbfk\") pod \"64bf3812-5e05-4d90-89d9-456487b0ce0f\" (UID: \"64bf3812-5e05-4d90-89d9-456487b0ce0f\") " Jan 26 18:02:38 crc 
kubenswrapper[4787]: I0126 18:02:38.356418 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.356440 4787 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2358959e-a741-4e22-8c4e-9169618a6a67-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.356456 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4m88\" (UniqueName: \"kubernetes.io/projected/2358959e-a741-4e22-8c4e-9169618a6a67-kube-api-access-f4m88\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.356468 4787 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.356478 4787 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.356489 4787 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2358959e-a741-4e22-8c4e-9169618a6a67-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.356605 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64bf3812-5e05-4d90-89d9-456487b0ce0f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64bf3812-5e05-4d90-89d9-456487b0ce0f" (UID: "64bf3812-5e05-4d90-89d9-456487b0ce0f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.359960 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64bf3812-5e05-4d90-89d9-456487b0ce0f-kube-api-access-nlbfk" (OuterVolumeSpecName: "kube-api-access-nlbfk") pod "64bf3812-5e05-4d90-89d9-456487b0ce0f" (UID: "64bf3812-5e05-4d90-89d9-456487b0ce0f"). InnerVolumeSpecName "kube-api-access-nlbfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.457713 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bf3812-5e05-4d90-89d9-456487b0ce0f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.457753 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlbfk\" (UniqueName: \"kubernetes.io/projected/64bf3812-5e05-4d90-89d9-456487b0ce0f-kube-api-access-nlbfk\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.738869 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8"} Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.738908 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0"} Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.741589 4787 generic.go:334] "Generic (PLEG): container finished" podID="a38241c2-8c24-4bc3-91bb-83ee519fe085" containerID="800837c14b59847183af47554bf1431e09ef8a6fe30976624c57af8e04666954" exitCode=0 Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.741640 
4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-545b-account-create-update-ql5ph" event={"ID":"a38241c2-8c24-4bc3-91bb-83ee519fe085","Type":"ContainerDied","Data":"800837c14b59847183af47554bf1431e09ef8a6fe30976624c57af8e04666954"} Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.744885 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5rlw8-config-rqsnx" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.744892 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f486c6acede0c9c5cb4aa76fa678ebf4f3f0173ebc8b02e2147c8073636d5f32" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.748236 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-698h4" event={"ID":"64bf3812-5e05-4d90-89d9-456487b0ce0f","Type":"ContainerDied","Data":"f0ac53f62b8c28087ee8c02ce74d3d1d7025e3e247c34db87bbb6a3c0a42ad7b"} Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.748289 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0ac53f62b8c28087ee8c02ce74d3d1d7025e3e247c34db87bbb6a3c0a42ad7b" Jan 26 18:02:38 crc kubenswrapper[4787]: I0126 18:02:38.749632 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-698h4" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.120597 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dnqzw" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.235528 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xlql4" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.241782 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-j6kz4" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.254056 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-69a3-account-create-update-vbh4d" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.272545 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b570-account-create-update-62lx7" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.293369 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhmgv\" (UniqueName: \"kubernetes.io/projected/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-kube-api-access-mhmgv\") pod \"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e\" (UID: \"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e\") " Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.293493 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-operator-scripts\") pod \"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e\" (UID: \"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e\") " Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.294173 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b57b7b82-aa02-4c9e-a8c4-8e21d44f627e" (UID: "b57b7b82-aa02-4c9e-a8c4-8e21d44f627e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.298219 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-kube-api-access-mhmgv" (OuterVolumeSpecName: "kube-api-access-mhmgv") pod "b57b7b82-aa02-4c9e-a8c4-8e21d44f627e" (UID: "b57b7b82-aa02-4c9e-a8c4-8e21d44f627e"). InnerVolumeSpecName "kube-api-access-mhmgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.394735 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c068e38-f516-49f7-853a-a69b8f7d822d-operator-scripts\") pod \"0c068e38-f516-49f7-853a-a69b8f7d822d\" (UID: \"0c068e38-f516-49f7-853a-a69b8f7d822d\") " Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.394805 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w95gq\" (UniqueName: \"kubernetes.io/projected/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-kube-api-access-w95gq\") pod \"f88a529e-5e32-4b4d-b2a8-18a1e4824c88\" (UID: \"f88a529e-5e32-4b4d-b2a8-18a1e4824c88\") " Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.394834 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-operator-scripts\") pod \"f88a529e-5e32-4b4d-b2a8-18a1e4824c88\" (UID: \"f88a529e-5e32-4b4d-b2a8-18a1e4824c88\") " Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.394900 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdf87\" (UniqueName: \"kubernetes.io/projected/86edc55e-eba7-4210-998f-92d7d7d5c18c-kube-api-access-fdf87\") pod \"86edc55e-eba7-4210-998f-92d7d7d5c18c\" (UID: \"86edc55e-eba7-4210-998f-92d7d7d5c18c\") " Jan 26 18:02:39 crc 
kubenswrapper[4787]: I0126 18:02:39.394936 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvs5c\" (UniqueName: \"kubernetes.io/projected/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-kube-api-access-dvs5c\") pod \"2d82e638-a1aa-4e59-ad3f-34d6f1db8516\" (UID: \"2d82e638-a1aa-4e59-ad3f-34d6f1db8516\") " Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.394978 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-operator-scripts\") pod \"2d82e638-a1aa-4e59-ad3f-34d6f1db8516\" (UID: \"2d82e638-a1aa-4e59-ad3f-34d6f1db8516\") " Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.395028 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97sgb\" (UniqueName: \"kubernetes.io/projected/0c068e38-f516-49f7-853a-a69b8f7d822d-kube-api-access-97sgb\") pod \"0c068e38-f516-49f7-853a-a69b8f7d822d\" (UID: \"0c068e38-f516-49f7-853a-a69b8f7d822d\") " Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.395597 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f88a529e-5e32-4b4d-b2a8-18a1e4824c88" (UID: "f88a529e-5e32-4b4d-b2a8-18a1e4824c88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.395596 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c068e38-f516-49f7-853a-a69b8f7d822d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c068e38-f516-49f7-853a-a69b8f7d822d" (UID: "0c068e38-f516-49f7-853a-a69b8f7d822d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.395732 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d82e638-a1aa-4e59-ad3f-34d6f1db8516" (UID: "2d82e638-a1aa-4e59-ad3f-34d6f1db8516"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.395793 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86edc55e-eba7-4210-998f-92d7d7d5c18c-operator-scripts\") pod \"86edc55e-eba7-4210-998f-92d7d7d5c18c\" (UID: \"86edc55e-eba7-4210-998f-92d7d7d5c18c\") " Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.396572 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhmgv\" (UniqueName: \"kubernetes.io/projected/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-kube-api-access-mhmgv\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.396626 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.396641 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.396652 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c068e38-f516-49f7-853a-a69b8f7d822d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:39 crc 
kubenswrapper[4787]: I0126 18:02:39.396667 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.396690 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86edc55e-eba7-4210-998f-92d7d7d5c18c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86edc55e-eba7-4210-998f-92d7d7d5c18c" (UID: "86edc55e-eba7-4210-998f-92d7d7d5c18c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.400131 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c068e38-f516-49f7-853a-a69b8f7d822d-kube-api-access-97sgb" (OuterVolumeSpecName: "kube-api-access-97sgb") pod "0c068e38-f516-49f7-853a-a69b8f7d822d" (UID: "0c068e38-f516-49f7-853a-a69b8f7d822d"). InnerVolumeSpecName "kube-api-access-97sgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.400152 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-kube-api-access-dvs5c" (OuterVolumeSpecName: "kube-api-access-dvs5c") pod "2d82e638-a1aa-4e59-ad3f-34d6f1db8516" (UID: "2d82e638-a1aa-4e59-ad3f-34d6f1db8516"). InnerVolumeSpecName "kube-api-access-dvs5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.400113 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86edc55e-eba7-4210-998f-92d7d7d5c18c-kube-api-access-fdf87" (OuterVolumeSpecName: "kube-api-access-fdf87") pod "86edc55e-eba7-4210-998f-92d7d7d5c18c" (UID: "86edc55e-eba7-4210-998f-92d7d7d5c18c"). 
InnerVolumeSpecName "kube-api-access-fdf87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.403636 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-kube-api-access-w95gq" (OuterVolumeSpecName: "kube-api-access-w95gq") pod "f88a529e-5e32-4b4d-b2a8-18a1e4824c88" (UID: "f88a529e-5e32-4b4d-b2a8-18a1e4824c88"). InnerVolumeSpecName "kube-api-access-w95gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.498132 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w95gq\" (UniqueName: \"kubernetes.io/projected/f88a529e-5e32-4b4d-b2a8-18a1e4824c88-kube-api-access-w95gq\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.498173 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdf87\" (UniqueName: \"kubernetes.io/projected/86edc55e-eba7-4210-998f-92d7d7d5c18c-kube-api-access-fdf87\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.498186 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvs5c\" (UniqueName: \"kubernetes.io/projected/2d82e638-a1aa-4e59-ad3f-34d6f1db8516-kube-api-access-dvs5c\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.498198 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97sgb\" (UniqueName: \"kubernetes.io/projected/0c068e38-f516-49f7-853a-a69b8f7d822d-kube-api-access-97sgb\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.498211 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86edc55e-eba7-4210-998f-92d7d7d5c18c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:39 crc 
kubenswrapper[4787]: I0126 18:02:39.604960 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2358959e-a741-4e22-8c4e-9169618a6a67" path="/var/lib/kubelet/pods/2358959e-a741-4e22-8c4e-9169618a6a67/volumes" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.756790 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xlql4" event={"ID":"2d82e638-a1aa-4e59-ad3f-34d6f1db8516","Type":"ContainerDied","Data":"f3b794b176c680c252e2ffbd09cff91891c13e9551ccc729e721985f4f954b73"} Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.756836 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b794b176c680c252e2ffbd09cff91891c13e9551ccc729e721985f4f954b73" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.756914 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xlql4" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.759313 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-69a3-account-create-update-vbh4d" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.759314 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-69a3-account-create-update-vbh4d" event={"ID":"0c068e38-f516-49f7-853a-a69b8f7d822d","Type":"ContainerDied","Data":"bf06142e09599ecde52e64bf752b8d2ba0e4afc52f4bc28ebfe842a457e20b84"} Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.759464 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf06142e09599ecde52e64bf752b8d2ba0e4afc52f4bc28ebfe842a457e20b84" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.760829 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-j6kz4" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.760841 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j6kz4" event={"ID":"86edc55e-eba7-4210-998f-92d7d7d5c18c","Type":"ContainerDied","Data":"b90f85b0976a77df1442f686ee221806950d62a881430cf08792e3e8afc7ffc4"} Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.760885 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b90f85b0976a77df1442f686ee221806950d62a881430cf08792e3e8afc7ffc4" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.763392 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dnqzw" event={"ID":"b57b7b82-aa02-4c9e-a8c4-8e21d44f627e","Type":"ContainerDied","Data":"ef5736614e46220cd5573e46e4c16bfc4be38f9fdc0932b7ab3bfd52ae42c181"} Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.763422 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef5736614e46220cd5573e46e4c16bfc4be38f9fdc0932b7ab3bfd52ae42c181" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.763460 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dnqzw" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.766649 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b570-account-create-update-62lx7" Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.767163 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b570-account-create-update-62lx7" event={"ID":"f88a529e-5e32-4b4d-b2a8-18a1e4824c88","Type":"ContainerDied","Data":"25bcd1a8e399d86117a6bb8b754ef44eef2fe2504ecb5d9021532f900d6627ad"} Jan 26 18:02:39 crc kubenswrapper[4787]: I0126 18:02:39.767195 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25bcd1a8e399d86117a6bb8b754ef44eef2fe2504ecb5d9021532f900d6627ad" Jan 26 18:02:40 crc kubenswrapper[4787]: I0126 18:02:40.778614 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41"} Jan 26 18:02:40 crc kubenswrapper[4787]: I0126 18:02:40.778663 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2"} Jan 26 18:02:44 crc kubenswrapper[4787]: I0126 18:02:44.821112 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-545b-account-create-update-ql5ph" event={"ID":"a38241c2-8c24-4bc3-91bb-83ee519fe085","Type":"ContainerDied","Data":"b73773124311d8993ce80ec63f02ce88a31057af8ca3ae6a0b1a06b0250d0e16"} Jan 26 18:02:44 crc kubenswrapper[4787]: I0126 18:02:44.821665 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b73773124311d8993ce80ec63f02ce88a31057af8ca3ae6a0b1a06b0250d0e16" Jan 26 18:02:44 crc kubenswrapper[4787]: I0126 18:02:44.866167 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-545b-account-create-update-ql5ph" Jan 26 18:02:44 crc kubenswrapper[4787]: I0126 18:02:44.993148 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a38241c2-8c24-4bc3-91bb-83ee519fe085-operator-scripts\") pod \"a38241c2-8c24-4bc3-91bb-83ee519fe085\" (UID: \"a38241c2-8c24-4bc3-91bb-83ee519fe085\") " Jan 26 18:02:44 crc kubenswrapper[4787]: I0126 18:02:44.993560 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8s6m\" (UniqueName: \"kubernetes.io/projected/a38241c2-8c24-4bc3-91bb-83ee519fe085-kube-api-access-d8s6m\") pod \"a38241c2-8c24-4bc3-91bb-83ee519fe085\" (UID: \"a38241c2-8c24-4bc3-91bb-83ee519fe085\") " Jan 26 18:02:44 crc kubenswrapper[4787]: I0126 18:02:44.993968 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38241c2-8c24-4bc3-91bb-83ee519fe085-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a38241c2-8c24-4bc3-91bb-83ee519fe085" (UID: "a38241c2-8c24-4bc3-91bb-83ee519fe085"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:44 crc kubenswrapper[4787]: I0126 18:02:44.994077 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a38241c2-8c24-4bc3-91bb-83ee519fe085-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:44 crc kubenswrapper[4787]: I0126 18:02:44.997537 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a38241c2-8c24-4bc3-91bb-83ee519fe085-kube-api-access-d8s6m" (OuterVolumeSpecName: "kube-api-access-d8s6m") pod "a38241c2-8c24-4bc3-91bb-83ee519fe085" (UID: "a38241c2-8c24-4bc3-91bb-83ee519fe085"). InnerVolumeSpecName "kube-api-access-d8s6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:45 crc kubenswrapper[4787]: I0126 18:02:45.095331 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8s6m\" (UniqueName: \"kubernetes.io/projected/a38241c2-8c24-4bc3-91bb-83ee519fe085-kube-api-access-d8s6m\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:45 crc kubenswrapper[4787]: I0126 18:02:45.833409 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d"} Jan 26 18:02:45 crc kubenswrapper[4787]: I0126 18:02:45.833453 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866"} Jan 26 18:02:45 crc kubenswrapper[4787]: I0126 18:02:45.837219 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-545b-account-create-update-ql5ph" Jan 26 18:02:45 crc kubenswrapper[4787]: I0126 18:02:45.839122 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v8c8m" event={"ID":"2acdb479-1c97-4508-b835-b04a0b0aa436","Type":"ContainerStarted","Data":"c45530d5418d7a3dd16e8692ccd60dfb284b1c5791080839a02dcb70faac766a"} Jan 26 18:02:45 crc kubenswrapper[4787]: I0126 18:02:45.876227 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-v8c8m" podStartSLOduration=3.676484091 podStartE2EDuration="10.876204067s" podCreationTimestamp="2026-01-26 18:02:35 +0000 UTC" firstStartedPulling="2026-01-26 18:02:37.488572385 +0000 UTC m=+1126.195708518" lastFinishedPulling="2026-01-26 18:02:44.688292361 +0000 UTC m=+1133.395428494" observedRunningTime="2026-01-26 18:02:45.856186962 +0000 UTC m=+1134.563323095" watchObservedRunningTime="2026-01-26 18:02:45.876204067 +0000 UTC m=+1134.583340220" Jan 26 18:02:47 crc kubenswrapper[4787]: I0126 18:02:47.858493 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8"} Jan 26 18:02:47 crc kubenswrapper[4787]: I0126 18:02:47.859081 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217"} Jan 26 18:02:47 crc kubenswrapper[4787]: I0126 18:02:47.859092 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb"} Jan 26 18:02:47 crc kubenswrapper[4787]: I0126 18:02:47.859102 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae"} Jan 26 18:02:48 crc kubenswrapper[4787]: I0126 18:02:48.874099 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf"} Jan 26 18:02:48 crc kubenswrapper[4787]: I0126 18:02:48.874398 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892"} Jan 26 18:02:48 crc kubenswrapper[4787]: I0126 18:02:48.874410 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerStarted","Data":"9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3"} Jan 26 18:02:48 crc kubenswrapper[4787]: I0126 18:02:48.928187 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.504809029 podStartE2EDuration="49.928162598s" podCreationTimestamp="2026-01-26 18:01:59 +0000 UTC" firstStartedPulling="2026-01-26 18:02:35.337803005 +0000 UTC m=+1124.044939138" lastFinishedPulling="2026-01-26 18:02:46.761156574 +0000 UTC m=+1135.468292707" observedRunningTime="2026-01-26 18:02:48.922582732 +0000 UTC m=+1137.629718885" watchObservedRunningTime="2026-01-26 18:02:48.928162598 +0000 UTC m=+1137.635298741" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.180137 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db84466c-f8n2s"] Jan 26 18:02:49 crc kubenswrapper[4787]: E0126 18:02:49.180934 4787 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86edc55e-eba7-4210-998f-92d7d7d5c18c" containerName="mariadb-database-create" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181016 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="86edc55e-eba7-4210-998f-92d7d7d5c18c" containerName="mariadb-database-create" Jan 26 18:02:49 crc kubenswrapper[4787]: E0126 18:02:49.181031 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38241c2-8c24-4bc3-91bb-83ee519fe085" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181041 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38241c2-8c24-4bc3-91bb-83ee519fe085" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: E0126 18:02:49.181058 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64bf3812-5e05-4d90-89d9-456487b0ce0f" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181067 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="64bf3812-5e05-4d90-89d9-456487b0ce0f" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: E0126 18:02:49.181096 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88a529e-5e32-4b4d-b2a8-18a1e4824c88" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181104 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88a529e-5e32-4b4d-b2a8-18a1e4824c88" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: E0126 18:02:49.181121 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d82e638-a1aa-4e59-ad3f-34d6f1db8516" containerName="mariadb-database-create" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181129 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d82e638-a1aa-4e59-ad3f-34d6f1db8516" 
containerName="mariadb-database-create" Jan 26 18:02:49 crc kubenswrapper[4787]: E0126 18:02:49.181144 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c068e38-f516-49f7-853a-a69b8f7d822d" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181152 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c068e38-f516-49f7-853a-a69b8f7d822d" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: E0126 18:02:49.181165 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57b7b82-aa02-4c9e-a8c4-8e21d44f627e" containerName="mariadb-database-create" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181173 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57b7b82-aa02-4c9e-a8c4-8e21d44f627e" containerName="mariadb-database-create" Jan 26 18:02:49 crc kubenswrapper[4787]: E0126 18:02:49.181185 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2358959e-a741-4e22-8c4e-9169618a6a67" containerName="ovn-config" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181195 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2358959e-a741-4e22-8c4e-9169618a6a67" containerName="ovn-config" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181405 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88a529e-5e32-4b4d-b2a8-18a1e4824c88" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181426 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c068e38-f516-49f7-853a-a69b8f7d822d" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181437 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="64bf3812-5e05-4d90-89d9-456487b0ce0f" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181449 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2358959e-a741-4e22-8c4e-9169618a6a67" containerName="ovn-config" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181462 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a38241c2-8c24-4bc3-91bb-83ee519fe085" containerName="mariadb-account-create-update" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181474 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d82e638-a1aa-4e59-ad3f-34d6f1db8516" containerName="mariadb-database-create" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181483 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="86edc55e-eba7-4210-998f-92d7d7d5c18c" containerName="mariadb-database-create" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.181493 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57b7b82-aa02-4c9e-a8c4-8e21d44f627e" containerName="mariadb-database-create" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.182556 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.185927 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.194122 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-f8n2s"] Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.261665 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.261755 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkv9p\" (UniqueName: \"kubernetes.io/projected/fc71ac41-c9cc-4b76-8251-97303837cc30-kube-api-access-rkv9p\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.261837 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.261876 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-config\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " 
pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.261905 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.261930 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-svc\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.363719 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.363805 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkv9p\" (UniqueName: \"kubernetes.io/projected/fc71ac41-c9cc-4b76-8251-97303837cc30-kube-api-access-rkv9p\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.363860 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " 
pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.363884 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-config\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.363907 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.363924 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-svc\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.365085 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-config\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.365086 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-svc\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.365192 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.365192 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.365541 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.385059 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkv9p\" (UniqueName: \"kubernetes.io/projected/fc71ac41-c9cc-4b76-8251-97303837cc30-kube-api-access-rkv9p\") pod \"dnsmasq-dns-8db84466c-f8n2s\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.538780 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.883775 4787 generic.go:334] "Generic (PLEG): container finished" podID="2acdb479-1c97-4508-b835-b04a0b0aa436" containerID="c45530d5418d7a3dd16e8692ccd60dfb284b1c5791080839a02dcb70faac766a" exitCode=0 Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.884097 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v8c8m" event={"ID":"2acdb479-1c97-4508-b835-b04a0b0aa436","Type":"ContainerDied","Data":"c45530d5418d7a3dd16e8692ccd60dfb284b1c5791080839a02dcb70faac766a"} Jan 26 18:02:49 crc kubenswrapper[4787]: I0126 18:02:49.964315 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-f8n2s"] Jan 26 18:02:49 crc kubenswrapper[4787]: W0126 18:02:49.964907 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc71ac41_c9cc_4b76_8251_97303837cc30.slice/crio-6350e6055a3a7883b1ab12357fd98c3b9f4915ce065ebc0a624da39cfce25cec WatchSource:0}: Error finding container 6350e6055a3a7883b1ab12357fd98c3b9f4915ce065ebc0a624da39cfce25cec: Status 404 returned error can't find the container with id 6350e6055a3a7883b1ab12357fd98c3b9f4915ce065ebc0a624da39cfce25cec Jan 26 18:02:50 crc kubenswrapper[4787]: I0126 18:02:50.896357 4787 generic.go:334] "Generic (PLEG): container finished" podID="fc71ac41-c9cc-4b76-8251-97303837cc30" containerID="3a47501813702700e09a2c1a54aa58af0f768c600fd883abd76107304d44f865" exitCode=0 Jan 26 18:02:50 crc kubenswrapper[4787]: I0126 18:02:50.896415 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" event={"ID":"fc71ac41-c9cc-4b76-8251-97303837cc30","Type":"ContainerDied","Data":"3a47501813702700e09a2c1a54aa58af0f768c600fd883abd76107304d44f865"} Jan 26 18:02:50 crc kubenswrapper[4787]: I0126 18:02:50.896759 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" event={"ID":"fc71ac41-c9cc-4b76-8251-97303837cc30","Type":"ContainerStarted","Data":"6350e6055a3a7883b1ab12357fd98c3b9f4915ce065ebc0a624da39cfce25cec"} Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.218404 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.293934 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-combined-ca-bundle\") pod \"2acdb479-1c97-4508-b835-b04a0b0aa436\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.294084 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-config-data\") pod \"2acdb479-1c97-4508-b835-b04a0b0aa436\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.294366 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcd64\" (UniqueName: \"kubernetes.io/projected/2acdb479-1c97-4508-b835-b04a0b0aa436-kube-api-access-tcd64\") pod \"2acdb479-1c97-4508-b835-b04a0b0aa436\" (UID: \"2acdb479-1c97-4508-b835-b04a0b0aa436\") " Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.297972 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2acdb479-1c97-4508-b835-b04a0b0aa436-kube-api-access-tcd64" (OuterVolumeSpecName: "kube-api-access-tcd64") pod "2acdb479-1c97-4508-b835-b04a0b0aa436" (UID: "2acdb479-1c97-4508-b835-b04a0b0aa436"). InnerVolumeSpecName "kube-api-access-tcd64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.318032 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2acdb479-1c97-4508-b835-b04a0b0aa436" (UID: "2acdb479-1c97-4508-b835-b04a0b0aa436"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.340746 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-config-data" (OuterVolumeSpecName: "config-data") pod "2acdb479-1c97-4508-b835-b04a0b0aa436" (UID: "2acdb479-1c97-4508-b835-b04a0b0aa436"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.395720 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.395756 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcd64\" (UniqueName: \"kubernetes.io/projected/2acdb479-1c97-4508-b835-b04a0b0aa436-kube-api-access-tcd64\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.395770 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2acdb479-1c97-4508-b835-b04a0b0aa436-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.908384 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v8c8m" 
event={"ID":"2acdb479-1c97-4508-b835-b04a0b0aa436","Type":"ContainerDied","Data":"32e68b7ee6e18e6a467ca8e533d427a2c35524028fb05f29a9f5f61f0c0eedfb"} Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.908747 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e68b7ee6e18e6a467ca8e533d427a2c35524028fb05f29a9f5f61f0c0eedfb" Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.908465 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v8c8m" Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.911022 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" event={"ID":"fc71ac41-c9cc-4b76-8251-97303837cc30","Type":"ContainerStarted","Data":"e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552"} Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.911189 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.913146 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5782v" event={"ID":"f1eed840-68dc-40c2-b2d7-3d3b350b9a12","Type":"ContainerStarted","Data":"67131b831d7bbc7854d3ec550e2e376edea9a5e823c31d4b239e4adbce5fe1ef"} Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.978885 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" podStartSLOduration=2.978860968 podStartE2EDuration="2.978860968s" podCreationTimestamp="2026-01-26 18:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:02:51.948192095 +0000 UTC m=+1140.655328228" watchObservedRunningTime="2026-01-26 18:02:51.978860968 +0000 UTC m=+1140.685997101" Jan 26 18:02:51 crc kubenswrapper[4787]: I0126 18:02:51.988172 4787 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5782v" podStartSLOduration=2.22346055 podStartE2EDuration="34.988145534s" podCreationTimestamp="2026-01-26 18:02:17 +0000 UTC" firstStartedPulling="2026-01-26 18:02:18.49735083 +0000 UTC m=+1107.204486983" lastFinishedPulling="2026-01-26 18:02:51.262035844 +0000 UTC m=+1139.969171967" observedRunningTime="2026-01-26 18:02:51.972047903 +0000 UTC m=+1140.679184046" watchObservedRunningTime="2026-01-26 18:02:51.988145534 +0000 UTC m=+1140.695281667" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.203047 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kgptn"] Jan 26 18:02:52 crc kubenswrapper[4787]: E0126 18:02:52.203494 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2acdb479-1c97-4508-b835-b04a0b0aa436" containerName="keystone-db-sync" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.203514 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2acdb479-1c97-4508-b835-b04a0b0aa436" containerName="keystone-db-sync" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.203662 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2acdb479-1c97-4508-b835-b04a0b0aa436" containerName="keystone-db-sync" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.204227 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.210309 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.210584 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.210733 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.210892 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.211031 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4s9tn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.216179 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-f8n2s"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.240370 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kgptn"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.309083 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-767d96458c-v9g9h"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.310030 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69v5z\" (UniqueName: \"kubernetes.io/projected/bd290409-f15e-481c-82ec-1e1638821b4e-kube-api-access-69v5z\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.310061 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-config-data\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.310117 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-credential-keys\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.310144 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-scripts\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.310176 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-combined-ca-bundle\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.310191 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-fernet-keys\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.310752 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.333800 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-v9g9h"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.384261 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-pjrz5"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.385311 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.392061 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qtc2j" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.392271 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.392397 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.402179 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pjrz5"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.411437 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.411679 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69v5z\" (UniqueName: \"kubernetes.io/projected/bd290409-f15e-481c-82ec-1e1638821b4e-kube-api-access-69v5z\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " 
pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.411808 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-config-data\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.411941 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.412088 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-svc\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.412191 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-credential-keys\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.412263 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv9fb\" (UniqueName: \"kubernetes.io/projected/5e4204d1-e6fb-47e9-be15-b269b19b641e-kube-api-access-fv9fb\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " 
pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.412437 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-scripts\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.412543 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-combined-ca-bundle\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.412618 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-fernet-keys\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.412708 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.412805 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-config\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc 
kubenswrapper[4787]: I0126 18:02:52.430065 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-config-data\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.444726 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69v5z\" (UniqueName: \"kubernetes.io/projected/bd290409-f15e-481c-82ec-1e1638821b4e-kube-api-access-69v5z\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.446998 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-fernet-keys\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.458246 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-combined-ca-bundle\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.459332 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-credential-keys\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.469551 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-scripts\") pod \"keystone-bootstrap-kgptn\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.482070 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-zjhv2"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.483101 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.491048 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.491272 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.491535 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pcnlj" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.515058 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-config-data\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.515256 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-config\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.515353 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.515430 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-combined-ca-bundle\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.515499 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.515596 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d251ab02-33c7-41d6-806e-3a80f332c86f-etc-machine-id\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.515683 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-svc\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.515778 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv9fb\" (UniqueName: 
\"kubernetes.io/projected/5e4204d1-e6fb-47e9-be15-b269b19b641e-kube-api-access-fv9fb\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.515859 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-scripts\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.515927 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5vd\" (UniqueName: \"kubernetes.io/projected/d251ab02-33c7-41d6-806e-3a80f332c86f-kube-api-access-zq5vd\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.516034 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-db-sync-config-data\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.516101 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.516910 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.517623 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-config\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.518196 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.518798 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.520396 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-svc\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.520897 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.522400 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zjhv2"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.560647 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-gwmwp"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.561887 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.563096 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv9fb\" (UniqueName: \"kubernetes.io/projected/5e4204d1-e6fb-47e9-be15-b269b19b641e-kube-api-access-fv9fb\") pod \"dnsmasq-dns-767d96458c-v9g9h\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.566750 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5rhf4" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.567526 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.572775 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-gwmwp"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.608961 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-v9g9h"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.609681 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.617845 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-combined-ca-bundle\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.617903 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-db-sync-config-data\") pod \"barbican-db-sync-gwmwp\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.617931 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d251ab02-33c7-41d6-806e-3a80f332c86f-etc-machine-id\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.617994 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tdrf\" (UniqueName: \"kubernetes.io/projected/1769639e-ba6d-4725-9012-91e6965c4cc0-kube-api-access-6tdrf\") pod \"neutron-db-sync-zjhv2\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.618027 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-scripts\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 
18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.618044 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-config\") pod \"neutron-db-sync-zjhv2\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.618068 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5vd\" (UniqueName: \"kubernetes.io/projected/d251ab02-33c7-41d6-806e-3a80f332c86f-kube-api-access-zq5vd\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.618092 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-combined-ca-bundle\") pod \"neutron-db-sync-zjhv2\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.618113 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-db-sync-config-data\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.618138 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mbcj\" (UniqueName: \"kubernetes.io/projected/cb6adff5-a812-45ee-a03d-89db150f295f-kube-api-access-7mbcj\") pod \"barbican-db-sync-gwmwp\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 
18:02:52.618164 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-config-data\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.618217 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-combined-ca-bundle\") pod \"barbican-db-sync-gwmwp\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.618327 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d251ab02-33c7-41d6-806e-3a80f332c86f-etc-machine-id\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.623020 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-combined-ca-bundle\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.634183 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-scripts\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.638848 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-config-data\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.650580 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.653213 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.655111 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-db-sync-config-data\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.655259 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.657665 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.658939 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.667620 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.669256 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.674605 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.711721 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq5vd\" (UniqueName: \"kubernetes.io/projected/d251ab02-33c7-41d6-806e-3a80f332c86f-kube-api-access-zq5vd\") pod \"cinder-db-sync-pjrz5\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") " pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.727115 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-mvxff"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.731422 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.734424 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-pjrz5" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.735648 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.736173 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mmprd" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.738096 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mvxff"] Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.739752 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tdrf\" (UniqueName: \"kubernetes.io/projected/1769639e-ba6d-4725-9012-91e6965c4cc0-kube-api-access-6tdrf\") pod \"neutron-db-sync-zjhv2\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.739794 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.739821 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwdx\" (UniqueName: \"kubernetes.io/projected/d9ed4e15-701b-4f52-9b6d-04e44d578960-kube-api-access-rcwdx\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.739849 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.739879 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-log-httpd\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.739913 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-config\") pod \"neutron-db-sync-zjhv2\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.745627 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-config\") pod \"neutron-db-sync-zjhv2\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.754850 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.758167 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-combined-ca-bundle\") pod \"neutron-db-sync-zjhv2\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.758229 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-config-data\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.758272 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mbcj\" (UniqueName: \"kubernetes.io/projected/cb6adff5-a812-45ee-a03d-89db150f295f-kube-api-access-7mbcj\") pod \"barbican-db-sync-gwmwp\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.758334 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-swift-storage-0\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.764371 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.764473 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-svc\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.764506 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-run-httpd\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.771000 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-combined-ca-bundle\") pod \"barbican-db-sync-gwmwp\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.771082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-db-sync-config-data\") pod \"barbican-db-sync-gwmwp\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.771106 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-scripts\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.771131 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.771168 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-config\") pod 
\"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.771258 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2ccn\" (UniqueName: \"kubernetes.io/projected/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-kube-api-access-m2ccn\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.784545 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tdrf\" (UniqueName: \"kubernetes.io/projected/1769639e-ba6d-4725-9012-91e6965c4cc0-kube-api-access-6tdrf\") pod \"neutron-db-sync-zjhv2\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.787552 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-combined-ca-bundle\") pod \"neutron-db-sync-zjhv2\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.795377 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-db-sync-config-data\") pod \"barbican-db-sync-gwmwp\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.796084 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-combined-ca-bundle\") pod \"barbican-db-sync-gwmwp\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " 
pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.800183 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mbcj\" (UniqueName: \"kubernetes.io/projected/cb6adff5-a812-45ee-a03d-89db150f295f-kube-api-access-7mbcj\") pod \"barbican-db-sync-gwmwp\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.804856 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.848777 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.873903 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-config-data\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.873989 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2ccn\" (UniqueName: \"kubernetes.io/projected/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-kube-api-access-m2ccn\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874020 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874039 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rcwdx\" (UniqueName: \"kubernetes.io/projected/d9ed4e15-701b-4f52-9b6d-04e44d578960-kube-api-access-rcwdx\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874054 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874076 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-log-httpd\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874111 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-combined-ca-bundle\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874144 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-config-data\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874189 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-swift-storage-0\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874209 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874239 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-svc\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874256 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-run-httpd\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874310 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c19d2c-b2f5-4c8e-964b-39af5b632525-logs\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874360 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-scripts\") pod \"ceilometer-0\" (UID: 
\"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874378 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874401 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-config\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874426 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q8jl\" (UniqueName: \"kubernetes.io/projected/b9c19d2c-b2f5-4c8e-964b-39af5b632525-kube-api-access-2q8jl\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.874452 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-scripts\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.875887 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-swift-storage-0\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " 
pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.876258 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.876893 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-svc\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.877238 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-run-httpd\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.878557 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.880788 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-log-httpd\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.881077 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-config\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.885175 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.886081 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.898024 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-scripts\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.898839 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-config-data\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.901371 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcwdx\" (UniqueName: \"kubernetes.io/projected/d9ed4e15-701b-4f52-9b6d-04e44d578960-kube-api-access-rcwdx\") pod \"dnsmasq-dns-7fc6d4ffc7-2bs6h\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 
18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.903854 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2ccn\" (UniqueName: \"kubernetes.io/projected/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-kube-api-access-m2ccn\") pod \"ceilometer-0\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " pod="openstack/ceilometer-0" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.975597 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c19d2c-b2f5-4c8e-964b-39af5b632525-logs\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.975688 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q8jl\" (UniqueName: \"kubernetes.io/projected/b9c19d2c-b2f5-4c8e-964b-39af5b632525-kube-api-access-2q8jl\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.975713 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-scripts\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.975740 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-config-data\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.975826 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-combined-ca-bundle\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.976542 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c19d2c-b2f5-4c8e-964b-39af5b632525-logs\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.985743 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-scripts\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.985796 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-combined-ca-bundle\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:52 crc kubenswrapper[4787]: I0126 18:02:52.990340 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-config-data\") pod \"placement-db-sync-mvxff\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.009549 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q8jl\" (UniqueName: \"kubernetes.io/projected/b9c19d2c-b2f5-4c8e-964b-39af5b632525-kube-api-access-2q8jl\") pod \"placement-db-sync-mvxff\" (UID: 
\"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.130210 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.149872 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.164023 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mvxff" Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.258577 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kgptn"] Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.620762 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pjrz5"] Jan 26 18:02:53 crc kubenswrapper[4787]: W0126 18:02:53.624536 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd251ab02_33c7_41d6_806e_3a80f332c86f.slice/crio-7e3e44d91bf16ada944a598561d67a61a6d11e12a474f9251f001bfeb03a96ec WatchSource:0}: Error finding container 7e3e44d91bf16ada944a598561d67a61a6d11e12a474f9251f001bfeb03a96ec: Status 404 returned error can't find the container with id 7e3e44d91bf16ada944a598561d67a61a6d11e12a474f9251f001bfeb03a96ec Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.650796 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zjhv2"] Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.685664 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-v9g9h"] Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.699733 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-gwmwp"] Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 
18:02:53.764279 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.783810 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h"] Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.878860 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mvxff"] Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.943723 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gwmwp" event={"ID":"cb6adff5-a812-45ee-a03d-89db150f295f","Type":"ContainerStarted","Data":"b2b89cca3a8a82602c6ca257a7ad920d61e744c787fec1fc0d042ffbf4633f64"} Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.945397 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pjrz5" event={"ID":"d251ab02-33c7-41d6-806e-3a80f332c86f","Type":"ContainerStarted","Data":"7e3e44d91bf16ada944a598561d67a61a6d11e12a474f9251f001bfeb03a96ec"} Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.948456 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-v9g9h" event={"ID":"5e4204d1-e6fb-47e9-be15-b269b19b641e","Type":"ContainerStarted","Data":"95b07b4b49d5dcb2d594d71182d5ec02aef2db03ecba38006a3ece709831e034"} Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.949818 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856","Type":"ContainerStarted","Data":"48ed0615e52f3293b833c42d8383405079eabd1d02b058cbfb5e19c1d61b1adc"} Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.951052 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mvxff" event={"ID":"b9c19d2c-b2f5-4c8e-964b-39af5b632525","Type":"ContainerStarted","Data":"3278449ec78b6cc86a0faec99d763e97b5ed39c9741e6a2a4005d2b05a33732c"} Jan 26 18:02:53 crc 
kubenswrapper[4787]: I0126 18:02:53.953121 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kgptn" event={"ID":"bd290409-f15e-481c-82ec-1e1638821b4e","Type":"ContainerStarted","Data":"6e211d25eb1387ae0d55fb3fd2f9760e74b3357db9c68e31d9f12e4ee86167d6"} Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.953164 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kgptn" event={"ID":"bd290409-f15e-481c-82ec-1e1638821b4e","Type":"ContainerStarted","Data":"c2a60bb4c9ff9a7bbed5f68e718066c44070e38c87015350c5a17a300e154897"} Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.954326 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" event={"ID":"d9ed4e15-701b-4f52-9b6d-04e44d578960","Type":"ContainerStarted","Data":"092f4bd32b5ab6901db6d891b1bdab11f0e9787af9459f9eb91380c9c6e24a5d"} Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.956275 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" podUID="fc71ac41-c9cc-4b76-8251-97303837cc30" containerName="dnsmasq-dns" containerID="cri-o://e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552" gracePeriod=10 Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.956652 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zjhv2" event={"ID":"1769639e-ba6d-4725-9012-91e6965c4cc0","Type":"ContainerStarted","Data":"2518765dd68036f7df21513699c36d87c235d1492281e826ea216799d8581b9e"} Jan 26 18:02:53 crc kubenswrapper[4787]: I0126 18:02:53.989057 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kgptn" podStartSLOduration=1.989032673 podStartE2EDuration="1.989032673s" podCreationTimestamp="2026-01-26 18:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 18:02:53.972017621 +0000 UTC m=+1142.679153754" watchObservedRunningTime="2026-01-26 18:02:53.989032673 +0000 UTC m=+1142.696168826" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.465881 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.611877 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-svc\") pod \"fc71ac41-c9cc-4b76-8251-97303837cc30\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.611988 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-swift-storage-0\") pod \"fc71ac41-c9cc-4b76-8251-97303837cc30\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.612020 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-sb\") pod \"fc71ac41-c9cc-4b76-8251-97303837cc30\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.612054 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkv9p\" (UniqueName: \"kubernetes.io/projected/fc71ac41-c9cc-4b76-8251-97303837cc30-kube-api-access-rkv9p\") pod \"fc71ac41-c9cc-4b76-8251-97303837cc30\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.612095 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-nb\") pod \"fc71ac41-c9cc-4b76-8251-97303837cc30\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.612206 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-config\") pod \"fc71ac41-c9cc-4b76-8251-97303837cc30\" (UID: \"fc71ac41-c9cc-4b76-8251-97303837cc30\") " Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.634213 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc71ac41-c9cc-4b76-8251-97303837cc30-kube-api-access-rkv9p" (OuterVolumeSpecName: "kube-api-access-rkv9p") pod "fc71ac41-c9cc-4b76-8251-97303837cc30" (UID: "fc71ac41-c9cc-4b76-8251-97303837cc30"). InnerVolumeSpecName "kube-api-access-rkv9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.686332 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fc71ac41-c9cc-4b76-8251-97303837cc30" (UID: "fc71ac41-c9cc-4b76-8251-97303837cc30"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.715444 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.715480 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkv9p\" (UniqueName: \"kubernetes.io/projected/fc71ac41-c9cc-4b76-8251-97303837cc30-kube-api-access-rkv9p\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.719546 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fc71ac41-c9cc-4b76-8251-97303837cc30" (UID: "fc71ac41-c9cc-4b76-8251-97303837cc30"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.731613 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fc71ac41-c9cc-4b76-8251-97303837cc30" (UID: "fc71ac41-c9cc-4b76-8251-97303837cc30"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.733098 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc71ac41-c9cc-4b76-8251-97303837cc30" (UID: "fc71ac41-c9cc-4b76-8251-97303837cc30"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.741362 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-config" (OuterVolumeSpecName: "config") pod "fc71ac41-c9cc-4b76-8251-97303837cc30" (UID: "fc71ac41-c9cc-4b76-8251-97303837cc30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.817300 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.817343 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.817359 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.817371 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc71ac41-c9cc-4b76-8251-97303837cc30-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.984101 4787 generic.go:334] "Generic (PLEG): container finished" podID="fc71ac41-c9cc-4b76-8251-97303837cc30" containerID="e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552" exitCode=0 Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.984148 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.984166 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" event={"ID":"fc71ac41-c9cc-4b76-8251-97303837cc30","Type":"ContainerDied","Data":"e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552"} Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.984462 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-f8n2s" event={"ID":"fc71ac41-c9cc-4b76-8251-97303837cc30","Type":"ContainerDied","Data":"6350e6055a3a7883b1ab12357fd98c3b9f4915ce065ebc0a624da39cfce25cec"} Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.984480 4787 scope.go:117] "RemoveContainer" containerID="e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552" Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.987771 4787 generic.go:334] "Generic (PLEG): container finished" podID="5e4204d1-e6fb-47e9-be15-b269b19b641e" containerID="1a3887658fe4eb401eee77fdea8521f9c18d1ee2435bcfb461ff7e8bdaf683d9" exitCode=0 Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.987934 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-v9g9h" event={"ID":"5e4204d1-e6fb-47e9-be15-b269b19b641e","Type":"ContainerDied","Data":"1a3887658fe4eb401eee77fdea8521f9c18d1ee2435bcfb461ff7e8bdaf683d9"} Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.990421 4787 generic.go:334] "Generic (PLEG): container finished" podID="d9ed4e15-701b-4f52-9b6d-04e44d578960" containerID="77c6b49201eea341e0899453197149b3324357d891fd059ba0d400ee1b6ebe14" exitCode=0 Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.990459 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" 
event={"ID":"d9ed4e15-701b-4f52-9b6d-04e44d578960","Type":"ContainerDied","Data":"77c6b49201eea341e0899453197149b3324357d891fd059ba0d400ee1b6ebe14"} Jan 26 18:02:54 crc kubenswrapper[4787]: I0126 18:02:54.995841 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zjhv2" event={"ID":"1769639e-ba6d-4725-9012-91e6965c4cc0","Type":"ContainerStarted","Data":"78fd7352dd4bb70eb277dc141504486ede9cc9f8e80b4b963e4e1a0e6c7a32fe"} Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.125328 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-zjhv2" podStartSLOduration=3.123155636 podStartE2EDuration="3.123155636s" podCreationTimestamp="2026-01-26 18:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:02:55.060646972 +0000 UTC m=+1143.767783105" watchObservedRunningTime="2026-01-26 18:02:55.123155636 +0000 UTC m=+1143.830291769" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.183321 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-f8n2s"] Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.191435 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-f8n2s"] Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.236232 4787 scope.go:117] "RemoveContainer" containerID="3a47501813702700e09a2c1a54aa58af0f768c600fd883abd76107304d44f865" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.361876 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.367798 4787 scope.go:117] "RemoveContainer" containerID="e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552" Jan 26 18:02:55 crc kubenswrapper[4787]: E0126 18:02:55.413843 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552\": container with ID starting with e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552 not found: ID does not exist" containerID="e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.413901 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552"} err="failed to get container status \"e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552\": rpc error: code = NotFound desc = could not find container \"e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552\": container with ID starting with e8c1099ad43951654088df7f55e550b56291adad37c664db90c693df57b6f552 not found: ID does not exist" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.413937 4787 scope.go:117] "RemoveContainer" containerID="3a47501813702700e09a2c1a54aa58af0f768c600fd883abd76107304d44f865" Jan 26 18:02:55 crc kubenswrapper[4787]: E0126 18:02:55.419472 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a47501813702700e09a2c1a54aa58af0f768c600fd883abd76107304d44f865\": container with ID starting with 3a47501813702700e09a2c1a54aa58af0f768c600fd883abd76107304d44f865 not found: ID does not exist" containerID="3a47501813702700e09a2c1a54aa58af0f768c600fd883abd76107304d44f865" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.419539 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a47501813702700e09a2c1a54aa58af0f768c600fd883abd76107304d44f865"} err="failed to get container status \"3a47501813702700e09a2c1a54aa58af0f768c600fd883abd76107304d44f865\": rpc error: code = NotFound desc = could not find container 
\"3a47501813702700e09a2c1a54aa58af0f768c600fd883abd76107304d44f865\": container with ID starting with 3a47501813702700e09a2c1a54aa58af0f768c600fd883abd76107304d44f865 not found: ID does not exist" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.428433 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.544792 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-swift-storage-0\") pod \"5e4204d1-e6fb-47e9-be15-b269b19b641e\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.544892 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv9fb\" (UniqueName: \"kubernetes.io/projected/5e4204d1-e6fb-47e9-be15-b269b19b641e-kube-api-access-fv9fb\") pod \"5e4204d1-e6fb-47e9-be15-b269b19b641e\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.544994 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-nb\") pod \"5e4204d1-e6fb-47e9-be15-b269b19b641e\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.545054 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-svc\") pod \"5e4204d1-e6fb-47e9-be15-b269b19b641e\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.545163 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-sb\") pod \"5e4204d1-e6fb-47e9-be15-b269b19b641e\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.545221 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-config\") pod \"5e4204d1-e6fb-47e9-be15-b269b19b641e\" (UID: \"5e4204d1-e6fb-47e9-be15-b269b19b641e\") " Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.555265 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4204d1-e6fb-47e9-be15-b269b19b641e-kube-api-access-fv9fb" (OuterVolumeSpecName: "kube-api-access-fv9fb") pod "5e4204d1-e6fb-47e9-be15-b269b19b641e" (UID: "5e4204d1-e6fb-47e9-be15-b269b19b641e"). InnerVolumeSpecName "kube-api-access-fv9fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.570786 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e4204d1-e6fb-47e9-be15-b269b19b641e" (UID: "5e4204d1-e6fb-47e9-be15-b269b19b641e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.571372 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5e4204d1-e6fb-47e9-be15-b269b19b641e" (UID: "5e4204d1-e6fb-47e9-be15-b269b19b641e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.585252 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-config" (OuterVolumeSpecName: "config") pod "5e4204d1-e6fb-47e9-be15-b269b19b641e" (UID: "5e4204d1-e6fb-47e9-be15-b269b19b641e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.590201 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e4204d1-e6fb-47e9-be15-b269b19b641e" (UID: "5e4204d1-e6fb-47e9-be15-b269b19b641e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.592886 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e4204d1-e6fb-47e9-be15-b269b19b641e" (UID: "5e4204d1-e6fb-47e9-be15-b269b19b641e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.603358 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc71ac41-c9cc-4b76-8251-97303837cc30" path="/var/lib/kubelet/pods/fc71ac41-c9cc-4b76-8251-97303837cc30/volumes" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.647440 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.647469 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.647480 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.647510 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.647521 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv9fb\" (UniqueName: \"kubernetes.io/projected/5e4204d1-e6fb-47e9-be15-b269b19b641e-kube-api-access-fv9fb\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:55 crc kubenswrapper[4787]: I0126 18:02:55.647529 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e4204d1-e6fb-47e9-be15-b269b19b641e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:02:56 crc kubenswrapper[4787]: I0126 18:02:56.012550 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-v9g9h" event={"ID":"5e4204d1-e6fb-47e9-be15-b269b19b641e","Type":"ContainerDied","Data":"95b07b4b49d5dcb2d594d71182d5ec02aef2db03ecba38006a3ece709831e034"} Jan 26 18:02:56 crc kubenswrapper[4787]: I0126 18:02:56.012610 4787 scope.go:117] "RemoveContainer" containerID="1a3887658fe4eb401eee77fdea8521f9c18d1ee2435bcfb461ff7e8bdaf683d9" Jan 26 18:02:56 crc kubenswrapper[4787]: I0126 18:02:56.012731 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-v9g9h" Jan 26 18:02:56 crc kubenswrapper[4787]: I0126 18:02:56.018201 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" event={"ID":"d9ed4e15-701b-4f52-9b6d-04e44d578960","Type":"ContainerStarted","Data":"7e077c3313483c663cd7c742a40a4f8cad42f7c629fac4120493b64571603b77"} Jan 26 18:02:56 crc kubenswrapper[4787]: I0126 18:02:56.107076 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-v9g9h"] Jan 26 18:02:56 crc kubenswrapper[4787]: I0126 18:02:56.122202 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-v9g9h"] Jan 26 18:02:57 crc kubenswrapper[4787]: I0126 18:02:57.037632 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:02:57 crc kubenswrapper[4787]: I0126 18:02:57.078427 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" podStartSLOduration=5.07840846 podStartE2EDuration="5.07840846s" podCreationTimestamp="2026-01-26 18:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:02:57.070781315 +0000 UTC m=+1145.777917448" watchObservedRunningTime="2026-01-26 18:02:57.07840846 +0000 UTC m=+1145.785544593" Jan 26 18:02:57 
crc kubenswrapper[4787]: I0126 18:02:57.605979 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4204d1-e6fb-47e9-be15-b269b19b641e" path="/var/lib/kubelet/pods/5e4204d1-e6fb-47e9-be15-b269b19b641e/volumes" Jan 26 18:02:59 crc kubenswrapper[4787]: I0126 18:02:59.070308 4787 generic.go:334] "Generic (PLEG): container finished" podID="bd290409-f15e-481c-82ec-1e1638821b4e" containerID="6e211d25eb1387ae0d55fb3fd2f9760e74b3357db9c68e31d9f12e4ee86167d6" exitCode=0 Jan 26 18:02:59 crc kubenswrapper[4787]: I0126 18:02:59.070369 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kgptn" event={"ID":"bd290409-f15e-481c-82ec-1e1638821b4e","Type":"ContainerDied","Data":"6e211d25eb1387ae0d55fb3fd2f9760e74b3357db9c68e31d9f12e4ee86167d6"} Jan 26 18:03:03 crc kubenswrapper[4787]: I0126 18:03:03.152409 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:03:03 crc kubenswrapper[4787]: I0126 18:03:03.211573 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ffbkk"] Jan 26 18:03:03 crc kubenswrapper[4787]: I0126 18:03:03.212233 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" podUID="455b92c3-f9ce-4bdf-9472-b41e3f4a1443" containerName="dnsmasq-dns" containerID="cri-o://426f484ee66b6c1fd184b9a2bfce9584ab95e10af4fb724f3206c08d58bddff4" gracePeriod=10 Jan 26 18:03:04 crc kubenswrapper[4787]: I0126 18:03:04.116642 4787 generic.go:334] "Generic (PLEG): container finished" podID="455b92c3-f9ce-4bdf-9472-b41e3f4a1443" containerID="426f484ee66b6c1fd184b9a2bfce9584ab95e10af4fb724f3206c08d58bddff4" exitCode=0 Jan 26 18:03:04 crc kubenswrapper[4787]: I0126 18:03:04.116713 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" 
event={"ID":"455b92c3-f9ce-4bdf-9472-b41e3f4a1443","Type":"ContainerDied","Data":"426f484ee66b6c1fd184b9a2bfce9584ab95e10af4fb724f3206c08d58bddff4"} Jan 26 18:03:04 crc kubenswrapper[4787]: I0126 18:03:04.485462 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" podUID="455b92c3-f9ce-4bdf-9472-b41e3f4a1443" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.240729 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.328548 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-scripts\") pod \"bd290409-f15e-481c-82ec-1e1638821b4e\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.328675 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-config-data\") pod \"bd290409-f15e-481c-82ec-1e1638821b4e\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.328711 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-fernet-keys\") pod \"bd290409-f15e-481c-82ec-1e1638821b4e\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.328735 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-combined-ca-bundle\") pod \"bd290409-f15e-481c-82ec-1e1638821b4e\" 
(UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.328766 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-credential-keys\") pod \"bd290409-f15e-481c-82ec-1e1638821b4e\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.328844 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69v5z\" (UniqueName: \"kubernetes.io/projected/bd290409-f15e-481c-82ec-1e1638821b4e-kube-api-access-69v5z\") pod \"bd290409-f15e-481c-82ec-1e1638821b4e\" (UID: \"bd290409-f15e-481c-82ec-1e1638821b4e\") " Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.334874 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-scripts" (OuterVolumeSpecName: "scripts") pod "bd290409-f15e-481c-82ec-1e1638821b4e" (UID: "bd290409-f15e-481c-82ec-1e1638821b4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.336114 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bd290409-f15e-481c-82ec-1e1638821b4e" (UID: "bd290409-f15e-481c-82ec-1e1638821b4e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.336776 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd290409-f15e-481c-82ec-1e1638821b4e-kube-api-access-69v5z" (OuterVolumeSpecName: "kube-api-access-69v5z") pod "bd290409-f15e-481c-82ec-1e1638821b4e" (UID: "bd290409-f15e-481c-82ec-1e1638821b4e"). 
InnerVolumeSpecName "kube-api-access-69v5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.336929 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bd290409-f15e-481c-82ec-1e1638821b4e" (UID: "bd290409-f15e-481c-82ec-1e1638821b4e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.359512 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd290409-f15e-481c-82ec-1e1638821b4e" (UID: "bd290409-f15e-481c-82ec-1e1638821b4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.387340 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-config-data" (OuterVolumeSpecName: "config-data") pod "bd290409-f15e-481c-82ec-1e1638821b4e" (UID: "bd290409-f15e-481c-82ec-1e1638821b4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.430877 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69v5z\" (UniqueName: \"kubernetes.io/projected/bd290409-f15e-481c-82ec-1e1638821b4e-kube-api-access-69v5z\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.430923 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.430936 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.431166 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.431177 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:05 crc kubenswrapper[4787]: I0126 18:03:05.431187 4787 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bd290409-f15e-481c-82ec-1e1638821b4e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.140495 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kgptn" event={"ID":"bd290409-f15e-481c-82ec-1e1638821b4e","Type":"ContainerDied","Data":"c2a60bb4c9ff9a7bbed5f68e718066c44070e38c87015350c5a17a300e154897"} Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 
18:03:06.140796 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2a60bb4c9ff9a7bbed5f68e718066c44070e38c87015350c5a17a300e154897" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.140584 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kgptn" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.317911 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kgptn"] Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.325930 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kgptn"] Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.413383 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-x45gw"] Jan 26 18:03:06 crc kubenswrapper[4787]: E0126 18:03:06.413754 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc71ac41-c9cc-4b76-8251-97303837cc30" containerName="init" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.413778 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc71ac41-c9cc-4b76-8251-97303837cc30" containerName="init" Jan 26 18:03:06 crc kubenswrapper[4787]: E0126 18:03:06.413796 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4204d1-e6fb-47e9-be15-b269b19b641e" containerName="init" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.413802 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4204d1-e6fb-47e9-be15-b269b19b641e" containerName="init" Jan 26 18:03:06 crc kubenswrapper[4787]: E0126 18:03:06.413818 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd290409-f15e-481c-82ec-1e1638821b4e" containerName="keystone-bootstrap" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.413825 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd290409-f15e-481c-82ec-1e1638821b4e" containerName="keystone-bootstrap" Jan 26 
18:03:06 crc kubenswrapper[4787]: E0126 18:03:06.413838 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc71ac41-c9cc-4b76-8251-97303837cc30" containerName="dnsmasq-dns" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.413845 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc71ac41-c9cc-4b76-8251-97303837cc30" containerName="dnsmasq-dns" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.414011 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4204d1-e6fb-47e9-be15-b269b19b641e" containerName="init" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.414027 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd290409-f15e-481c-82ec-1e1638821b4e" containerName="keystone-bootstrap" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.414040 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc71ac41-c9cc-4b76-8251-97303837cc30" containerName="dnsmasq-dns" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.414587 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.421624 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.421750 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.421988 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4s9tn" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.421994 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.422096 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.429345 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x45gw"] Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.449043 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-scripts\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.449238 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-fernet-keys\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.449297 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-chjtk\" (UniqueName: \"kubernetes.io/projected/dc5fb4a7-fca6-412e-819e-dac6667d92d6-kube-api-access-chjtk\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.449415 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-credential-keys\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.449465 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-combined-ca-bundle\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.449538 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-config-data\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.550976 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-credential-keys\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.551031 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-combined-ca-bundle\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.551073 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-config-data\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.551115 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-scripts\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.551242 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-fernet-keys\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.551269 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjtk\" (UniqueName: \"kubernetes.io/projected/dc5fb4a7-fca6-412e-819e-dac6667d92d6-kube-api-access-chjtk\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.556006 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-scripts\") pod \"keystone-bootstrap-x45gw\" (UID: 
\"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.556188 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-fernet-keys\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.557101 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-config-data\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.557451 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-combined-ca-bundle\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.561753 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-credential-keys\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 18:03:06.576928 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjtk\" (UniqueName: \"kubernetes.io/projected/dc5fb4a7-fca6-412e-819e-dac6667d92d6-kube-api-access-chjtk\") pod \"keystone-bootstrap-x45gw\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:06 crc kubenswrapper[4787]: I0126 
18:03:06.740029 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:07 crc kubenswrapper[4787]: I0126 18:03:07.601059 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd290409-f15e-481c-82ec-1e1638821b4e" path="/var/lib/kubelet/pods/bd290409-f15e-481c-82ec-1e1638821b4e/volumes" Jan 26 18:03:09 crc kubenswrapper[4787]: I0126 18:03:09.485767 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" podUID="455b92c3-f9ce-4bdf-9472-b41e3f4a1443" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Jan 26 18:03:12 crc kubenswrapper[4787]: E0126 18:03:12.822535 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 26 18:03:12 crc kubenswrapper[4787]: E0126 18:03:12.823181 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mbcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-gwmwp_openstack(cb6adff5-a812-45ee-a03d-89db150f295f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:03:12 crc kubenswrapper[4787]: E0126 18:03:12.824540 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-gwmwp" 
podUID="cb6adff5-a812-45ee-a03d-89db150f295f" Jan 26 18:03:13 crc kubenswrapper[4787]: E0126 18:03:13.196297 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-gwmwp" podUID="cb6adff5-a812-45ee-a03d-89db150f295f" Jan 26 18:03:13 crc kubenswrapper[4787]: E0126 18:03:13.835460 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 26 18:03:13 crc kubenswrapper[4787]: E0126 18:03:13.835876 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zq5vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-pjrz5_openstack(d251ab02-33c7-41d6-806e-3a80f332c86f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:03:13 crc kubenswrapper[4787]: E0126 18:03:13.837151 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-pjrz5" podUID="d251ab02-33c7-41d6-806e-3a80f332c86f" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.078268 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.195934 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-nb\") pod \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.196139 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-dns-svc\") pod \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.196165 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-sb\") pod \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " Jan 26 18:03:14 crc 
kubenswrapper[4787]: I0126 18:03:14.196203 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-config\") pod \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.196275 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqlsv\" (UniqueName: \"kubernetes.io/projected/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-kube-api-access-dqlsv\") pod \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\" (UID: \"455b92c3-f9ce-4bdf-9472-b41e3f4a1443\") " Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.205614 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-kube-api-access-dqlsv" (OuterVolumeSpecName: "kube-api-access-dqlsv") pod "455b92c3-f9ce-4bdf-9472-b41e3f4a1443" (UID: "455b92c3-f9ce-4bdf-9472-b41e3f4a1443"). InnerVolumeSpecName "kube-api-access-dqlsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.210439 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.210440 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ffbkk" event={"ID":"455b92c3-f9ce-4bdf-9472-b41e3f4a1443","Type":"ContainerDied","Data":"18f63c0ae886993d5f1f13a6177fa9ac320257ba39d786eb54fb71e52eb15578"} Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.210899 4787 scope.go:117] "RemoveContainer" containerID="426f484ee66b6c1fd184b9a2bfce9584ab95e10af4fb724f3206c08d58bddff4" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.214553 4787 generic.go:334] "Generic (PLEG): container finished" podID="f1eed840-68dc-40c2-b2d7-3d3b350b9a12" containerID="67131b831d7bbc7854d3ec550e2e376edea9a5e823c31d4b239e4adbce5fe1ef" exitCode=0 Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.215614 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5782v" event={"ID":"f1eed840-68dc-40c2-b2d7-3d3b350b9a12","Type":"ContainerDied","Data":"67131b831d7bbc7854d3ec550e2e376edea9a5e823c31d4b239e4adbce5fe1ef"} Jan 26 18:03:14 crc kubenswrapper[4787]: E0126 18:03:14.217760 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-pjrz5" podUID="d251ab02-33c7-41d6-806e-3a80f332c86f" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.232972 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x45gw"] Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.272057 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"455b92c3-f9ce-4bdf-9472-b41e3f4a1443" (UID: "455b92c3-f9ce-4bdf-9472-b41e3f4a1443"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.277450 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "455b92c3-f9ce-4bdf-9472-b41e3f4a1443" (UID: "455b92c3-f9ce-4bdf-9472-b41e3f4a1443"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.279458 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-config" (OuterVolumeSpecName: "config") pod "455b92c3-f9ce-4bdf-9472-b41e3f4a1443" (UID: "455b92c3-f9ce-4bdf-9472-b41e3f4a1443"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.292531 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "455b92c3-f9ce-4bdf-9472-b41e3f4a1443" (UID: "455b92c3-f9ce-4bdf-9472-b41e3f4a1443"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.299064 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.299087 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.299098 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.299107 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.299116 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqlsv\" (UniqueName: \"kubernetes.io/projected/455b92c3-f9ce-4bdf-9472-b41e3f4a1443-kube-api-access-dqlsv\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.317777 4787 scope.go:117] "RemoveContainer" containerID="b6dfd0da4e4d4fb34e69e465d03475d48a1d41b876029450200f087dd7d882c5" Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.538662 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ffbkk"] Jan 26 18:03:14 crc kubenswrapper[4787]: I0126 18:03:14.544556 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ffbkk"] Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.226227 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-bootstrap-x45gw" event={"ID":"dc5fb4a7-fca6-412e-819e-dac6667d92d6","Type":"ContainerStarted","Data":"34bbf27ab9cda47766dbce09363fdf4948eabf48cb2cb8e92ec94ea5da30de08"} Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.226288 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x45gw" event={"ID":"dc5fb4a7-fca6-412e-819e-dac6667d92d6","Type":"ContainerStarted","Data":"6a6c8a6ce4939c9b69bab0f12dc933e305ebd7ac9be1976a4475d4211544df0d"} Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.231263 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856","Type":"ContainerStarted","Data":"85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c"} Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.237217 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mvxff" event={"ID":"b9c19d2c-b2f5-4c8e-964b-39af5b632525","Type":"ContainerStarted","Data":"213534ee68f203e83fe67c526d43166c90766a50a742fdda59d2f15ece3e7409"} Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.256323 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-x45gw" podStartSLOduration=9.256306482 podStartE2EDuration="9.256306482s" podCreationTimestamp="2026-01-26 18:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:15.253875293 +0000 UTC m=+1163.961011426" watchObservedRunningTime="2026-01-26 18:03:15.256306482 +0000 UTC m=+1163.963442615" Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.600314 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="455b92c3-f9ce-4bdf-9472-b41e3f4a1443" path="/var/lib/kubelet/pods/455b92c3-f9ce-4bdf-9472-b41e3f4a1443/volumes" Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.708318 
4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5782v" Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.731264 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-mvxff" podStartSLOduration=3.833184305 podStartE2EDuration="23.731246267s" podCreationTimestamp="2026-01-26 18:02:52 +0000 UTC" firstStartedPulling="2026-01-26 18:02:53.8857244 +0000 UTC m=+1142.592860523" lastFinishedPulling="2026-01-26 18:03:13.783786342 +0000 UTC m=+1162.490922485" observedRunningTime="2026-01-26 18:03:15.273864068 +0000 UTC m=+1163.981000201" watchObservedRunningTime="2026-01-26 18:03:15.731246267 +0000 UTC m=+1164.438382400" Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.825713 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-config-data\") pod \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.826118 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-combined-ca-bundle\") pod \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.826690 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg5kr\" (UniqueName: \"kubernetes.io/projected/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-kube-api-access-sg5kr\") pod \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.826748 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-db-sync-config-data\") pod \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\" (UID: \"f1eed840-68dc-40c2-b2d7-3d3b350b9a12\") " Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.842210 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f1eed840-68dc-40c2-b2d7-3d3b350b9a12" (UID: "f1eed840-68dc-40c2-b2d7-3d3b350b9a12"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.842909 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-kube-api-access-sg5kr" (OuterVolumeSpecName: "kube-api-access-sg5kr") pod "f1eed840-68dc-40c2-b2d7-3d3b350b9a12" (UID: "f1eed840-68dc-40c2-b2d7-3d3b350b9a12"). InnerVolumeSpecName "kube-api-access-sg5kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.867155 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1eed840-68dc-40c2-b2d7-3d3b350b9a12" (UID: "f1eed840-68dc-40c2-b2d7-3d3b350b9a12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.874187 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-config-data" (OuterVolumeSpecName: "config-data") pod "f1eed840-68dc-40c2-b2d7-3d3b350b9a12" (UID: "f1eed840-68dc-40c2-b2d7-3d3b350b9a12"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.928890 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.928931 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg5kr\" (UniqueName: \"kubernetes.io/projected/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-kube-api-access-sg5kr\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.928970 4787 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:15 crc kubenswrapper[4787]: I0126 18:03:15.928982 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1eed840-68dc-40c2-b2d7-3d3b350b9a12-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.264586 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5782v" event={"ID":"f1eed840-68dc-40c2-b2d7-3d3b350b9a12","Type":"ContainerDied","Data":"035be67d7222a47430da1132b4cb5a816786fc2ca32547c8f91813a55831f3f0"} Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.264627 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="035be67d7222a47430da1132b4cb5a816786fc2ca32547c8f91813a55831f3f0" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.264715 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5782v" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.277693 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856","Type":"ContainerStarted","Data":"a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9"} Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.690565 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-p22wr"] Jan 26 18:03:16 crc kubenswrapper[4787]: E0126 18:03:16.691003 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1eed840-68dc-40c2-b2d7-3d3b350b9a12" containerName="glance-db-sync" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.691019 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1eed840-68dc-40c2-b2d7-3d3b350b9a12" containerName="glance-db-sync" Jan 26 18:03:16 crc kubenswrapper[4787]: E0126 18:03:16.691043 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455b92c3-f9ce-4bdf-9472-b41e3f4a1443" containerName="dnsmasq-dns" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.691050 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="455b92c3-f9ce-4bdf-9472-b41e3f4a1443" containerName="dnsmasq-dns" Jan 26 18:03:16 crc kubenswrapper[4787]: E0126 18:03:16.691063 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455b92c3-f9ce-4bdf-9472-b41e3f4a1443" containerName="init" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.691070 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="455b92c3-f9ce-4bdf-9472-b41e3f4a1443" containerName="init" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.691282 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="455b92c3-f9ce-4bdf-9472-b41e3f4a1443" containerName="dnsmasq-dns" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.691312 4787 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f1eed840-68dc-40c2-b2d7-3d3b350b9a12" containerName="glance-db-sync" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.693623 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.705859 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-p22wr"] Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.849205 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.849294 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4ncv\" (UniqueName: \"kubernetes.io/projected/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-kube-api-access-h4ncv\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.849335 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-config\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.849360 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: 
\"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.849376 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.849526 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.951086 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.951450 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.951514 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4ncv\" (UniqueName: \"kubernetes.io/projected/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-kube-api-access-h4ncv\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: 
\"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.951549 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-config\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.951575 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.951590 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.952362 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.952392 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " 
pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.952690 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.952934 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.953178 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-config\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:16 crc kubenswrapper[4787]: I0126 18:03:16.983585 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4ncv\" (UniqueName: \"kubernetes.io/projected/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-kube-api-access-h4ncv\") pod \"dnsmasq-dns-6f6f8cb849-p22wr\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.017320 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.611352 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-p22wr"] Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.611619 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.625776 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.625890 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.635514 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.635688 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.636987 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xrhkx" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.777617 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.777677 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-config-data\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " 
pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.777718 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.777751 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-scripts\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.777788 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.777816 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n254\" (UniqueName: \"kubernetes.io/projected/f372f344-175c-40c2-aab5-ad30990e9b16-kube-api-access-5n254\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.778095 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-logs\") pod \"glance-default-external-api-0\" (UID: 
\"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.840287 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.841582 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.844662 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.868858 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.879861 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-config-data\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.879920 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.879960 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-scripts\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.879988 
4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.880013 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n254\" (UniqueName: \"kubernetes.io/projected/f372f344-175c-40c2-aab5-ad30990e9b16-kube-api-access-5n254\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.880064 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-logs\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.880133 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.880429 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.882704 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-logs\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.883133 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.887463 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-config-data\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.891296 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.901898 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-scripts\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.909333 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" 
(UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.912798 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n254\" (UniqueName: \"kubernetes.io/projected/f372f344-175c-40c2-aab5-ad30990e9b16-kube-api-access-5n254\") pod \"glance-default-external-api-0\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.981532 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.981582 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.981636 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn6lf\" (UniqueName: \"kubernetes.io/projected/76870c9f-aa42-4f20-9576-a7afce840af0-kube-api-access-zn6lf\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.981664 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-logs\") pod \"glance-default-internal-api-0\" 
(UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.981683 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.981701 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:17 crc kubenswrapper[4787]: I0126 18:03:17.981764 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.005307 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.084916 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-logs\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.084979 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.084998 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.085037 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.085090 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc 
kubenswrapper[4787]: I0126 18:03:18.085114 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.085173 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn6lf\" (UniqueName: \"kubernetes.io/projected/76870c9f-aa42-4f20-9576-a7afce840af0-kube-api-access-zn6lf\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.085904 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-logs\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.089265 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.089500 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.089895 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.094820 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.098107 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.104011 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn6lf\" (UniqueName: \"kubernetes.io/projected/76870c9f-aa42-4f20-9576-a7afce840af0-kube-api-access-zn6lf\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.127009 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.262021 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.310991 4787 generic.go:334] "Generic (PLEG): container finished" podID="bd1a3dca-b7f8-40d0-9608-e2a45af9348b" containerID="0242a6446f4c396a03b7fdd54c363335eec1edaed3cce989c3a0bc0c3fdaf390" exitCode=0 Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.311053 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" event={"ID":"bd1a3dca-b7f8-40d0-9608-e2a45af9348b","Type":"ContainerDied","Data":"0242a6446f4c396a03b7fdd54c363335eec1edaed3cce989c3a0bc0c3fdaf390"} Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.311112 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" event={"ID":"bd1a3dca-b7f8-40d0-9608-e2a45af9348b","Type":"ContainerStarted","Data":"2d4666c78c7e15e9125a648a42333c4ffe8eb87c89a137086e8a55c01a2227ef"} Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.322634 4787 generic.go:334] "Generic (PLEG): container finished" podID="b9c19d2c-b2f5-4c8e-964b-39af5b632525" containerID="213534ee68f203e83fe67c526d43166c90766a50a742fdda59d2f15ece3e7409" exitCode=0 Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.322678 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mvxff" event={"ID":"b9c19d2c-b2f5-4c8e-964b-39af5b632525","Type":"ContainerDied","Data":"213534ee68f203e83fe67c526d43166c90766a50a742fdda59d2f15ece3e7409"} Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.415323 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:03:18 crc kubenswrapper[4787]: I0126 18:03:18.701510 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:03:18 crc kubenswrapper[4787]: W0126 18:03:18.704141 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76870c9f_aa42_4f20_9576_a7afce840af0.slice/crio-6e0bf9795ac10e66f38eabb491f7ef1e94a985286353812779dc87c3ed69009c WatchSource:0}: Error finding container 6e0bf9795ac10e66f38eabb491f7ef1e94a985286353812779dc87c3ed69009c: Status 404 returned error can't find the container with id 6e0bf9795ac10e66f38eabb491f7ef1e94a985286353812779dc87c3ed69009c Jan 26 18:03:19 crc kubenswrapper[4787]: I0126 18:03:19.343529 4787 generic.go:334] "Generic (PLEG): container finished" podID="dc5fb4a7-fca6-412e-819e-dac6667d92d6" containerID="34bbf27ab9cda47766dbce09363fdf4948eabf48cb2cb8e92ec94ea5da30de08" exitCode=0 Jan 26 18:03:19 crc kubenswrapper[4787]: I0126 18:03:19.343610 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x45gw" event={"ID":"dc5fb4a7-fca6-412e-819e-dac6667d92d6","Type":"ContainerDied","Data":"34bbf27ab9cda47766dbce09363fdf4948eabf48cb2cb8e92ec94ea5da30de08"} Jan 26 18:03:19 crc kubenswrapper[4787]: I0126 18:03:19.347352 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f372f344-175c-40c2-aab5-ad30990e9b16","Type":"ContainerStarted","Data":"cd80e8df0dd47ea93bb30ab2a75ba6e6acbe1bd36c92eeaba6f50b91198784de"} Jan 26 18:03:19 crc kubenswrapper[4787]: I0126 18:03:19.347400 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f372f344-175c-40c2-aab5-ad30990e9b16","Type":"ContainerStarted","Data":"e7b70f209c8321de6a71260a40f748b08c499fc1b56be99d91bb0e2399a5212f"} Jan 26 18:03:19 crc kubenswrapper[4787]: I0126 18:03:19.358477 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" event={"ID":"bd1a3dca-b7f8-40d0-9608-e2a45af9348b","Type":"ContainerStarted","Data":"a8ad3c863e44c6cb36d5f2929056302a930b9a9f30cd9e81a85a5cdd8fa99a03"} Jan 26 18:03:19 crc kubenswrapper[4787]: I0126 18:03:19.358552 4787 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:19 crc kubenswrapper[4787]: I0126 18:03:19.373888 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76870c9f-aa42-4f20-9576-a7afce840af0","Type":"ContainerStarted","Data":"6e0bf9795ac10e66f38eabb491f7ef1e94a985286353812779dc87c3ed69009c"} Jan 26 18:03:19 crc kubenswrapper[4787]: I0126 18:03:19.393706 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" podStartSLOduration=3.39368754 podStartE2EDuration="3.39368754s" podCreationTimestamp="2026-01-26 18:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:19.391426774 +0000 UTC m=+1168.098562897" watchObservedRunningTime="2026-01-26 18:03:19.39368754 +0000 UTC m=+1168.100823673" Jan 26 18:03:19 crc kubenswrapper[4787]: I0126 18:03:19.416629 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:03:19 crc kubenswrapper[4787]: I0126 18:03:19.474123 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:03:20 crc kubenswrapper[4787]: I0126 18:03:20.391972 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76870c9f-aa42-4f20-9576-a7afce840af0","Type":"ContainerStarted","Data":"a048355bf8102c3348b3554634ce1055f390edb3950b2502867a3c11ec4cadd4"} Jan 26 18:03:20 crc kubenswrapper[4787]: I0126 18:03:20.395626 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f372f344-175c-40c2-aab5-ad30990e9b16","Type":"ContainerStarted","Data":"dfa177d24b7a8ea03438426cabe7e275b9742adb2e32b0270db31409b1d0a46d"} Jan 26 18:03:20 crc kubenswrapper[4787]: 
I0126 18:03:20.396038 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f372f344-175c-40c2-aab5-ad30990e9b16" containerName="glance-log" containerID="cri-o://cd80e8df0dd47ea93bb30ab2a75ba6e6acbe1bd36c92eeaba6f50b91198784de" gracePeriod=30 Jan 26 18:03:20 crc kubenswrapper[4787]: I0126 18:03:20.396188 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f372f344-175c-40c2-aab5-ad30990e9b16" containerName="glance-httpd" containerID="cri-o://dfa177d24b7a8ea03438426cabe7e275b9742adb2e32b0270db31409b1d0a46d" gracePeriod=30 Jan 26 18:03:20 crc kubenswrapper[4787]: I0126 18:03:20.417807 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.417788875 podStartE2EDuration="4.417788875s" podCreationTimestamp="2026-01-26 18:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:20.416202037 +0000 UTC m=+1169.123338170" watchObservedRunningTime="2026-01-26 18:03:20.417788875 +0000 UTC m=+1169.124925008" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.410254 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mvxff" event={"ID":"b9c19d2c-b2f5-4c8e-964b-39af5b632525","Type":"ContainerDied","Data":"3278449ec78b6cc86a0faec99d763e97b5ed39c9741e6a2a4005d2b05a33732c"} Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.410534 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3278449ec78b6cc86a0faec99d763e97b5ed39c9741e6a2a4005d2b05a33732c" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.412625 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x45gw" 
event={"ID":"dc5fb4a7-fca6-412e-819e-dac6667d92d6","Type":"ContainerDied","Data":"6a6c8a6ce4939c9b69bab0f12dc933e305ebd7ac9be1976a4475d4211544df0d"} Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.412675 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6c8a6ce4939c9b69bab0f12dc933e305ebd7ac9be1976a4475d4211544df0d" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.415301 4787 generic.go:334] "Generic (PLEG): container finished" podID="f372f344-175c-40c2-aab5-ad30990e9b16" containerID="dfa177d24b7a8ea03438426cabe7e275b9742adb2e32b0270db31409b1d0a46d" exitCode=0 Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.415329 4787 generic.go:334] "Generic (PLEG): container finished" podID="f372f344-175c-40c2-aab5-ad30990e9b16" containerID="cd80e8df0dd47ea93bb30ab2a75ba6e6acbe1bd36c92eeaba6f50b91198784de" exitCode=143 Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.415365 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f372f344-175c-40c2-aab5-ad30990e9b16","Type":"ContainerDied","Data":"dfa177d24b7a8ea03438426cabe7e275b9742adb2e32b0270db31409b1d0a46d"} Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.415406 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f372f344-175c-40c2-aab5-ad30990e9b16","Type":"ContainerDied","Data":"cd80e8df0dd47ea93bb30ab2a75ba6e6acbe1bd36c92eeaba6f50b91198784de"} Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.416589 4787 generic.go:334] "Generic (PLEG): container finished" podID="1769639e-ba6d-4725-9012-91e6965c4cc0" containerID="78fd7352dd4bb70eb277dc141504486ede9cc9f8e80b4b963e4e1a0e6c7a32fe" exitCode=0 Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.416614 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zjhv2" 
event={"ID":"1769639e-ba6d-4725-9012-91e6965c4cc0","Type":"ContainerDied","Data":"78fd7352dd4bb70eb277dc141504486ede9cc9f8e80b4b963e4e1a0e6c7a32fe"} Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.511803 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mvxff" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.517520 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.681786 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chjtk\" (UniqueName: \"kubernetes.io/projected/dc5fb4a7-fca6-412e-819e-dac6667d92d6-kube-api-access-chjtk\") pod \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.682197 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-config-data\") pod \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.682774 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-config-data\") pod \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.682810 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-combined-ca-bundle\") pod \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.682879 
4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-scripts\") pod \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.682994 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c19d2c-b2f5-4c8e-964b-39af5b632525-logs\") pod \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.683034 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-combined-ca-bundle\") pod \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.683113 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q8jl\" (UniqueName: \"kubernetes.io/projected/b9c19d2c-b2f5-4c8e-964b-39af5b632525-kube-api-access-2q8jl\") pod \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\" (UID: \"b9c19d2c-b2f5-4c8e-964b-39af5b632525\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.683147 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-scripts\") pod \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.683216 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-credential-keys\") pod \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\" (UID: 
\"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.683247 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-fernet-keys\") pod \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\" (UID: \"dc5fb4a7-fca6-412e-819e-dac6667d92d6\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.684150 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c19d2c-b2f5-4c8e-964b-39af5b632525-logs" (OuterVolumeSpecName: "logs") pod "b9c19d2c-b2f5-4c8e-964b-39af5b632525" (UID: "b9c19d2c-b2f5-4c8e-964b-39af5b632525"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.684971 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c19d2c-b2f5-4c8e-964b-39af5b632525-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.687715 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5fb4a7-fca6-412e-819e-dac6667d92d6-kube-api-access-chjtk" (OuterVolumeSpecName: "kube-api-access-chjtk") pod "dc5fb4a7-fca6-412e-819e-dac6667d92d6" (UID: "dc5fb4a7-fca6-412e-819e-dac6667d92d6"). InnerVolumeSpecName "kube-api-access-chjtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.688343 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dc5fb4a7-fca6-412e-819e-dac6667d92d6" (UID: "dc5fb4a7-fca6-412e-819e-dac6667d92d6"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.701725 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-scripts" (OuterVolumeSpecName: "scripts") pod "dc5fb4a7-fca6-412e-819e-dac6667d92d6" (UID: "dc5fb4a7-fca6-412e-819e-dac6667d92d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.702462 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-scripts" (OuterVolumeSpecName: "scripts") pod "b9c19d2c-b2f5-4c8e-964b-39af5b632525" (UID: "b9c19d2c-b2f5-4c8e-964b-39af5b632525"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.705047 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dc5fb4a7-fca6-412e-819e-dac6667d92d6" (UID: "dc5fb4a7-fca6-412e-819e-dac6667d92d6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.705202 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c19d2c-b2f5-4c8e-964b-39af5b632525-kube-api-access-2q8jl" (OuterVolumeSpecName: "kube-api-access-2q8jl") pod "b9c19d2c-b2f5-4c8e-964b-39af5b632525" (UID: "b9c19d2c-b2f5-4c8e-964b-39af5b632525"). InnerVolumeSpecName "kube-api-access-2q8jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.722883 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9c19d2c-b2f5-4c8e-964b-39af5b632525" (UID: "b9c19d2c-b2f5-4c8e-964b-39af5b632525"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.787338 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.787386 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chjtk\" (UniqueName: \"kubernetes.io/projected/dc5fb4a7-fca6-412e-819e-dac6667d92d6-kube-api-access-chjtk\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.787398 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.787407 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.787417 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q8jl\" (UniqueName: \"kubernetes.io/projected/b9c19d2c-b2f5-4c8e-964b-39af5b632525-kube-api-access-2q8jl\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.787425 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.787434 4787 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.801034 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc5fb4a7-fca6-412e-819e-dac6667d92d6" (UID: "dc5fb4a7-fca6-412e-819e-dac6667d92d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.802334 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-config-data" (OuterVolumeSpecName: "config-data") pod "dc5fb4a7-fca6-412e-819e-dac6667d92d6" (UID: "dc5fb4a7-fca6-412e-819e-dac6667d92d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.805575 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-config-data" (OuterVolumeSpecName: "config-data") pod "b9c19d2c-b2f5-4c8e-964b-39af5b632525" (UID: "b9c19d2c-b2f5-4c8e-964b-39af5b632525"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.889542 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c19d2c-b2f5-4c8e-964b-39af5b632525-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.889584 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.889595 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5fb4a7-fca6-412e-819e-dac6667d92d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.904325 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.990585 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-config-data\") pod \"f372f344-175c-40c2-aab5-ad30990e9b16\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.990678 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-scripts\") pod \"f372f344-175c-40c2-aab5-ad30990e9b16\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.990713 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-logs\") pod 
\"f372f344-175c-40c2-aab5-ad30990e9b16\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.990783 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-httpd-run\") pod \"f372f344-175c-40c2-aab5-ad30990e9b16\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.990813 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n254\" (UniqueName: \"kubernetes.io/projected/f372f344-175c-40c2-aab5-ad30990e9b16-kube-api-access-5n254\") pod \"f372f344-175c-40c2-aab5-ad30990e9b16\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.990851 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-combined-ca-bundle\") pod \"f372f344-175c-40c2-aab5-ad30990e9b16\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.990879 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"f372f344-175c-40c2-aab5-ad30990e9b16\" (UID: \"f372f344-175c-40c2-aab5-ad30990e9b16\") " Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.991779 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-logs" (OuterVolumeSpecName: "logs") pod "f372f344-175c-40c2-aab5-ad30990e9b16" (UID: "f372f344-175c-40c2-aab5-ad30990e9b16"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.992006 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f372f344-175c-40c2-aab5-ad30990e9b16" (UID: "f372f344-175c-40c2-aab5-ad30990e9b16"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.995351 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "f372f344-175c-40c2-aab5-ad30990e9b16" (UID: "f372f344-175c-40c2-aab5-ad30990e9b16"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.995398 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f372f344-175c-40c2-aab5-ad30990e9b16-kube-api-access-5n254" (OuterVolumeSpecName: "kube-api-access-5n254") pod "f372f344-175c-40c2-aab5-ad30990e9b16" (UID: "f372f344-175c-40c2-aab5-ad30990e9b16"). InnerVolumeSpecName "kube-api-access-5n254". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:21 crc kubenswrapper[4787]: I0126 18:03:21.997929 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-scripts" (OuterVolumeSpecName: "scripts") pod "f372f344-175c-40c2-aab5-ad30990e9b16" (UID: "f372f344-175c-40c2-aab5-ad30990e9b16"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.021003 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f372f344-175c-40c2-aab5-ad30990e9b16" (UID: "f372f344-175c-40c2-aab5-ad30990e9b16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.046191 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-config-data" (OuterVolumeSpecName: "config-data") pod "f372f344-175c-40c2-aab5-ad30990e9b16" (UID: "f372f344-175c-40c2-aab5-ad30990e9b16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.092863 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.093051 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.093183 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.093563 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f372f344-175c-40c2-aab5-ad30990e9b16-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 
18:03:22.093700 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n254\" (UniqueName: \"kubernetes.io/projected/f372f344-175c-40c2-aab5-ad30990e9b16-kube-api-access-5n254\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.093803 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f372f344-175c-40c2-aab5-ad30990e9b16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.093938 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.114426 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.195250 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.426399 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f372f344-175c-40c2-aab5-ad30990e9b16","Type":"ContainerDied","Data":"e7b70f209c8321de6a71260a40f748b08c499fc1b56be99d91bb0e2399a5212f"} Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.427301 4787 scope.go:117] "RemoveContainer" containerID="dfa177d24b7a8ea03438426cabe7e275b9742adb2e32b0270db31409b1d0a46d" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.426629 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.430063 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856","Type":"ContainerStarted","Data":"5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5"} Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.445502 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mvxff" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.445529 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x45gw" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.445504 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76870c9f-aa42-4f20-9576-a7afce840af0","Type":"ContainerStarted","Data":"21a019db0bb17ee4a9909aebc04496e599180df3c9435faa99f094e7c6c9bf2e"} Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.456108 4787 scope.go:117] "RemoveContainer" containerID="cd80e8df0dd47ea93bb30ab2a75ba6e6acbe1bd36c92eeaba6f50b91198784de" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.483420 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.495387 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.509784 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:03:22 crc kubenswrapper[4787]: E0126 18:03:22.510316 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c19d2c-b2f5-4c8e-964b-39af5b632525" containerName="placement-db-sync" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.510341 
4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c19d2c-b2f5-4c8e-964b-39af5b632525" containerName="placement-db-sync" Jan 26 18:03:22 crc kubenswrapper[4787]: E0126 18:03:22.510359 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5fb4a7-fca6-412e-819e-dac6667d92d6" containerName="keystone-bootstrap" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.510368 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5fb4a7-fca6-412e-819e-dac6667d92d6" containerName="keystone-bootstrap" Jan 26 18:03:22 crc kubenswrapper[4787]: E0126 18:03:22.510387 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f372f344-175c-40c2-aab5-ad30990e9b16" containerName="glance-httpd" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.510396 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f372f344-175c-40c2-aab5-ad30990e9b16" containerName="glance-httpd" Jan 26 18:03:22 crc kubenswrapper[4787]: E0126 18:03:22.510419 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f372f344-175c-40c2-aab5-ad30990e9b16" containerName="glance-log" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.510426 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f372f344-175c-40c2-aab5-ad30990e9b16" containerName="glance-log" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.510649 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5fb4a7-fca6-412e-819e-dac6667d92d6" containerName="keystone-bootstrap" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.510671 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c19d2c-b2f5-4c8e-964b-39af5b632525" containerName="placement-db-sync" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.510685 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f372f344-175c-40c2-aab5-ad30990e9b16" containerName="glance-log" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.510708 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f372f344-175c-40c2-aab5-ad30990e9b16" containerName="glance-httpd" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.511851 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.515442 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.515647 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.519457 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.605328 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.605408 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.605478 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: 
\"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.605510 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.605536 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-logs\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.605583 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.605638 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.605693 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wclh6\" (UniqueName: \"kubernetes.io/projected/a0385f16-3fcd-47b1-8018-96960e1193bf-kube-api-access-wclh6\") pod 
\"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.651085 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-69b7546858-x7nbv"] Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.652695 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.655443 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.656405 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.657469 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.657655 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.657802 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mmprd" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.660678 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-76b5d4b8cb-wb8cc"] Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.662217 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.665237 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.665292 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.665321 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.665521 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.665685 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4s9tn" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.665895 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.668113 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69b7546858-x7nbv"] Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.693622 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76b5d4b8cb-wb8cc"] Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.708071 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.708644 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.708703 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wclh6\" (UniqueName: \"kubernetes.io/projected/a0385f16-3fcd-47b1-8018-96960e1193bf-kube-api-access-wclh6\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.708813 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.708905 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.709082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.709108 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.709149 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-logs\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.723391 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-logs\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.723967 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.724611 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.730894 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.731764 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.744567 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.756282 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wclh6\" (UniqueName: \"kubernetes.io/projected/a0385f16-3fcd-47b1-8018-96960e1193bf-kube-api-access-wclh6\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.758329 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.762688 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " 
pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.822855 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-scripts\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.822965 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-combined-ca-bundle\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823003 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-public-tls-certs\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823049 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-config-data\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823089 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-fernet-keys\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " 
pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823148 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57ddk\" (UniqueName: \"kubernetes.io/projected/d85cc53a-0132-4491-82a1-056badced30c-kube-api-access-57ddk\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823295 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-config-data\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823413 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-scripts\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823509 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f8a7212-44ab-42f1-86be-8b79726fe4f8-logs\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823555 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-internal-tls-certs\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " 
pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823616 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-public-tls-certs\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823648 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-internal-tls-certs\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823694 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-credential-keys\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823727 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft289\" (UniqueName: \"kubernetes.io/projected/3f8a7212-44ab-42f1-86be-8b79726fe4f8-kube-api-access-ft289\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.823804 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-combined-ca-bundle\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: 
\"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.835464 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.924939 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-public-tls-certs\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925010 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-internal-tls-certs\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925030 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-credential-keys\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925050 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft289\" (UniqueName: \"kubernetes.io/projected/3f8a7212-44ab-42f1-86be-8b79726fe4f8-kube-api-access-ft289\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925085 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-combined-ca-bundle\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925124 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-scripts\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925158 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-public-tls-certs\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925173 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-combined-ca-bundle\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925200 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-config-data\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925222 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-fernet-keys\") pod 
\"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925244 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57ddk\" (UniqueName: \"kubernetes.io/projected/d85cc53a-0132-4491-82a1-056badced30c-kube-api-access-57ddk\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925262 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-config-data\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925287 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-scripts\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925323 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f8a7212-44ab-42f1-86be-8b79726fe4f8-logs\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.925346 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-internal-tls-certs\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " 
pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.933713 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-combined-ca-bundle\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.934152 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f8a7212-44ab-42f1-86be-8b79726fe4f8-logs\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.934495 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-internal-tls-certs\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.936980 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-public-tls-certs\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.937525 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-fernet-keys\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.939794 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-config-data\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.941577 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-scripts\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.942249 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-internal-tls-certs\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.943843 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-credential-keys\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.943915 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-scripts\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.944282 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-combined-ca-bundle\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.944297 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-public-tls-certs\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.945036 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-config-data\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.956511 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57ddk\" (UniqueName: \"kubernetes.io/projected/d85cc53a-0132-4491-82a1-056badced30c-kube-api-access-57ddk\") pod \"keystone-76b5d4b8cb-wb8cc\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.957029 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft289\" (UniqueName: \"kubernetes.io/projected/3f8a7212-44ab-42f1-86be-8b79726fe4f8-kube-api-access-ft289\") pod \"placement-69b7546858-x7nbv\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.990389 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:22 crc kubenswrapper[4787]: I0126 18:03:22.993994 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.007040 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.128845 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-config\") pod \"1769639e-ba6d-4725-9012-91e6965c4cc0\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.129028 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-combined-ca-bundle\") pod \"1769639e-ba6d-4725-9012-91e6965c4cc0\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.129142 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tdrf\" (UniqueName: \"kubernetes.io/projected/1769639e-ba6d-4725-9012-91e6965c4cc0-kube-api-access-6tdrf\") pod \"1769639e-ba6d-4725-9012-91e6965c4cc0\" (UID: \"1769639e-ba6d-4725-9012-91e6965c4cc0\") " Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.137251 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1769639e-ba6d-4725-9012-91e6965c4cc0-kube-api-access-6tdrf" (OuterVolumeSpecName: "kube-api-access-6tdrf") pod "1769639e-ba6d-4725-9012-91e6965c4cc0" (UID: "1769639e-ba6d-4725-9012-91e6965c4cc0"). InnerVolumeSpecName "kube-api-access-6tdrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.168464 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-config" (OuterVolumeSpecName: "config") pod "1769639e-ba6d-4725-9012-91e6965c4cc0" (UID: "1769639e-ba6d-4725-9012-91e6965c4cc0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.202533 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1769639e-ba6d-4725-9012-91e6965c4cc0" (UID: "1769639e-ba6d-4725-9012-91e6965c4cc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.235058 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.235095 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tdrf\" (UniqueName: \"kubernetes.io/projected/1769639e-ba6d-4725-9012-91e6965c4cc0-kube-api-access-6tdrf\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.235108 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1769639e-ba6d-4725-9012-91e6965c4cc0-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.440327 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.479399 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-db-sync-zjhv2" event={"ID":"1769639e-ba6d-4725-9012-91e6965c4cc0","Type":"ContainerDied","Data":"2518765dd68036f7df21513699c36d87c235d1492281e826ea216799d8581b9e"} Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.479688 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2518765dd68036f7df21513699c36d87c235d1492281e826ea216799d8581b9e" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.479456 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zjhv2" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.482923 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="76870c9f-aa42-4f20-9576-a7afce840af0" containerName="glance-log" containerID="cri-o://a048355bf8102c3348b3554634ce1055f390edb3950b2502867a3c11ec4cadd4" gracePeriod=30 Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.483101 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0385f16-3fcd-47b1-8018-96960e1193bf","Type":"ContainerStarted","Data":"9154f07288325ba452e235462cf30b386b08cc2603c4a7d914a0469f5e3d611c"} Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.483456 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="76870c9f-aa42-4f20-9576-a7afce840af0" containerName="glance-httpd" containerID="cri-o://21a019db0bb17ee4a9909aebc04496e599180df3c9435faa99f094e7c6c9bf2e" gracePeriod=30 Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.510510 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.510490461 podStartE2EDuration="7.510490461s" podCreationTimestamp="2026-01-26 18:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:23.503727257 +0000 UTC m=+1172.210863390" watchObservedRunningTime="2026-01-26 18:03:23.510490461 +0000 UTC m=+1172.217626594" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.569705 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76b5d4b8cb-wb8cc"] Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.602665 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f372f344-175c-40c2-aab5-ad30990e9b16" path="/var/lib/kubelet/pods/f372f344-175c-40c2-aab5-ad30990e9b16/volumes" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.607227 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69b7546858-x7nbv"] Jan 26 18:03:23 crc kubenswrapper[4787]: W0126 18:03:23.620390 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f8a7212_44ab_42f1_86be_8b79726fe4f8.slice/crio-80d5fd9a44e0d66e217601733f94e5a55f13c2b2e82b16d5445ad717f3c7df9b WatchSource:0}: Error finding container 80d5fd9a44e0d66e217601733f94e5a55f13c2b2e82b16d5445ad717f3c7df9b: Status 404 returned error can't find the container with id 80d5fd9a44e0d66e217601733f94e5a55f13c2b2e82b16d5445ad717f3c7df9b Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.840241 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-p22wr"] Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.840778 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" podUID="bd1a3dca-b7f8-40d0-9608-e2a45af9348b" containerName="dnsmasq-dns" containerID="cri-o://a8ad3c863e44c6cb36d5f2929056302a930b9a9f30cd9e81a85a5cdd8fa99a03" gracePeriod=10 Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.848486 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.887224 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685444497c-vrnz8"] Jan 26 18:03:23 crc kubenswrapper[4787]: E0126 18:03:23.887628 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1769639e-ba6d-4725-9012-91e6965c4cc0" containerName="neutron-db-sync" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.887640 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1769639e-ba6d-4725-9012-91e6965c4cc0" containerName="neutron-db-sync" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.887815 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1769639e-ba6d-4725-9012-91e6965c4cc0" containerName="neutron-db-sync" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.888719 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.902245 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-vrnz8"] Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.942136 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-559dbc6bdd-zjdgc"] Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.943994 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.945545 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pcnlj" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.947793 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.949027 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 18:03:23 crc kubenswrapper[4787]: I0126 18:03:23.949867 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.008671 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-559dbc6bdd-zjdgc"] Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.064312 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.064382 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-httpd-config\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.064445 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnfqr\" (UniqueName: \"kubernetes.io/projected/ddde001c-cec7-43fc-8f4d-0d7153ec3911-kube-api-access-cnfqr\") pod 
\"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.064488 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.064515 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj7tz\" (UniqueName: \"kubernetes.io/projected/678441af-ae6d-4142-91fc-450e171e9a35-kube-api-access-cj7tz\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.064541 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-combined-ca-bundle\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.064569 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-config\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.064631 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.064656 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-svc\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.064682 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-ovndb-tls-certs\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.064710 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-config\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.166603 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-svc\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.166651 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-ovndb-tls-certs\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.166679 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-config\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.166731 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.166755 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-httpd-config\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.166793 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnfqr\" (UniqueName: \"kubernetes.io/projected/ddde001c-cec7-43fc-8f4d-0d7153ec3911-kube-api-access-cnfqr\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.166823 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-nb\") pod 
\"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.166840 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj7tz\" (UniqueName: \"kubernetes.io/projected/678441af-ae6d-4142-91fc-450e171e9a35-kube-api-access-cj7tz\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.166861 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-combined-ca-bundle\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.166880 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-config\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.166920 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.167519 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-svc\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") 
" pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.168304 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.168365 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-config\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.168563 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.168673 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.170423 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-ovndb-tls-certs\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.171727 
4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-httpd-config\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.171934 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-combined-ca-bundle\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.181254 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-config\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.186003 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj7tz\" (UniqueName: \"kubernetes.io/projected/678441af-ae6d-4142-91fc-450e171e9a35-kube-api-access-cj7tz\") pod \"dnsmasq-dns-685444497c-vrnz8\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.194054 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnfqr\" (UniqueName: \"kubernetes.io/projected/ddde001c-cec7-43fc-8f4d-0d7153ec3911-kube-api-access-cnfqr\") pod \"neutron-559dbc6bdd-zjdgc\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.359496 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.366070 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.501984 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76b5d4b8cb-wb8cc" event={"ID":"d85cc53a-0132-4491-82a1-056badced30c","Type":"ContainerStarted","Data":"0b9197c0a132a15fde8443730e1e6afbd7356943bd0400304fb06f44082983a1"} Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.502026 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76b5d4b8cb-wb8cc" event={"ID":"d85cc53a-0132-4491-82a1-056badced30c","Type":"ContainerStarted","Data":"a03c80c28bb651ac333fadc054b6010121b162d5d635ef30bcd40434e7f77eb0"} Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.502555 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.526175 4787 generic.go:334] "Generic (PLEG): container finished" podID="bd1a3dca-b7f8-40d0-9608-e2a45af9348b" containerID="a8ad3c863e44c6cb36d5f2929056302a930b9a9f30cd9e81a85a5cdd8fa99a03" exitCode=0 Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.526218 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" event={"ID":"bd1a3dca-b7f8-40d0-9608-e2a45af9348b","Type":"ContainerDied","Data":"a8ad3c863e44c6cb36d5f2929056302a930b9a9f30cd9e81a85a5cdd8fa99a03"} Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.526280 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" event={"ID":"bd1a3dca-b7f8-40d0-9608-e2a45af9348b","Type":"ContainerDied","Data":"2d4666c78c7e15e9125a648a42333c4ffe8eb87c89a137086e8a55c01a2227ef"} Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.526296 4787 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d4666c78c7e15e9125a648a42333c4ffe8eb87c89a137086e8a55c01a2227ef" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.535592 4787 generic.go:334] "Generic (PLEG): container finished" podID="76870c9f-aa42-4f20-9576-a7afce840af0" containerID="21a019db0bb17ee4a9909aebc04496e599180df3c9435faa99f094e7c6c9bf2e" exitCode=0 Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.536078 4787 generic.go:334] "Generic (PLEG): container finished" podID="76870c9f-aa42-4f20-9576-a7afce840af0" containerID="a048355bf8102c3348b3554634ce1055f390edb3950b2502867a3c11ec4cadd4" exitCode=143 Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.535685 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76870c9f-aa42-4f20-9576-a7afce840af0","Type":"ContainerDied","Data":"21a019db0bb17ee4a9909aebc04496e599180df3c9435faa99f094e7c6c9bf2e"} Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.536148 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76870c9f-aa42-4f20-9576-a7afce840af0","Type":"ContainerDied","Data":"a048355bf8102c3348b3554634ce1055f390edb3950b2502867a3c11ec4cadd4"} Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.543236 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0385f16-3fcd-47b1-8018-96960e1193bf","Type":"ContainerStarted","Data":"97cea5b5e73344eabfcedfb18888bbb658258dfb3869f5a9c449d9dc8c32ab52"} Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.543251 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-76b5d4b8cb-wb8cc" podStartSLOduration=2.543230649 podStartE2EDuration="2.543230649s" podCreationTimestamp="2026-01-26 18:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-26 18:03:24.526375877 +0000 UTC m=+1173.233512010" watchObservedRunningTime="2026-01-26 18:03:24.543230649 +0000 UTC m=+1173.250366782" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.545442 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69b7546858-x7nbv" event={"ID":"3f8a7212-44ab-42f1-86be-8b79726fe4f8","Type":"ContainerStarted","Data":"c41ea7e4541f8199f1a664d5d047bd04b1cc721396791e7f0e6eb3e40078f9bf"} Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.545485 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69b7546858-x7nbv" event={"ID":"3f8a7212-44ab-42f1-86be-8b79726fe4f8","Type":"ContainerStarted","Data":"d672f1c0ba676ceab5a813f55187b4369018d13a2c0eb61a432e5bca929c29df"} Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.545499 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69b7546858-x7nbv" event={"ID":"3f8a7212-44ab-42f1-86be-8b79726fe4f8","Type":"ContainerStarted","Data":"80d5fd9a44e0d66e217601733f94e5a55f13c2b2e82b16d5445ad717f3c7df9b"} Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.545653 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.660190 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.671244 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-69b7546858-x7nbv" podStartSLOduration=2.671220928 podStartE2EDuration="2.671220928s" podCreationTimestamp="2026-01-26 18:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:24.646700639 +0000 UTC m=+1173.353836772" watchObservedRunningTime="2026-01-26 18:03:24.671220928 +0000 UTC m=+1173.378357061" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.790533 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-config\") pod \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.791088 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-nb\") pod \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.791154 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-swift-storage-0\") pod \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.791239 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-sb\") pod \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\" 
(UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.791300 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-svc\") pod \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.791384 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4ncv\" (UniqueName: \"kubernetes.io/projected/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-kube-api-access-h4ncv\") pod \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\" (UID: \"bd1a3dca-b7f8-40d0-9608-e2a45af9348b\") " Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.815241 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-kube-api-access-h4ncv" (OuterVolumeSpecName: "kube-api-access-h4ncv") pod "bd1a3dca-b7f8-40d0-9608-e2a45af9348b" (UID: "bd1a3dca-b7f8-40d0-9608-e2a45af9348b"). InnerVolumeSpecName "kube-api-access-h4ncv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.851378 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bd1a3dca-b7f8-40d0-9608-e2a45af9348b" (UID: "bd1a3dca-b7f8-40d0-9608-e2a45af9348b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.862360 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd1a3dca-b7f8-40d0-9608-e2a45af9348b" (UID: "bd1a3dca-b7f8-40d0-9608-e2a45af9348b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.862725 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd1a3dca-b7f8-40d0-9608-e2a45af9348b" (UID: "bd1a3dca-b7f8-40d0-9608-e2a45af9348b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.873887 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd1a3dca-b7f8-40d0-9608-e2a45af9348b" (UID: "bd1a3dca-b7f8-40d0-9608-e2a45af9348b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.877358 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-config" (OuterVolumeSpecName: "config") pod "bd1a3dca-b7f8-40d0-9608-e2a45af9348b" (UID: "bd1a3dca-b7f8-40d0-9608-e2a45af9348b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.894036 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.894078 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.894092 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4ncv\" (UniqueName: \"kubernetes.io/projected/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-kube-api-access-h4ncv\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.894108 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.894119 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:24 crc kubenswrapper[4787]: I0126 18:03:24.894130 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd1a3dca-b7f8-40d0-9608-e2a45af9348b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.028142 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-vrnz8"] Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.131959 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.198011 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-config-data\") pod \"76870c9f-aa42-4f20-9576-a7afce840af0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.198174 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-combined-ca-bundle\") pod \"76870c9f-aa42-4f20-9576-a7afce840af0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.198243 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn6lf\" (UniqueName: \"kubernetes.io/projected/76870c9f-aa42-4f20-9576-a7afce840af0-kube-api-access-zn6lf\") pod \"76870c9f-aa42-4f20-9576-a7afce840af0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.198295 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-scripts\") pod \"76870c9f-aa42-4f20-9576-a7afce840af0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.198348 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-logs\") pod \"76870c9f-aa42-4f20-9576-a7afce840af0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.198369 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"76870c9f-aa42-4f20-9576-a7afce840af0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.198459 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-httpd-run\") pod \"76870c9f-aa42-4f20-9576-a7afce840af0\" (UID: \"76870c9f-aa42-4f20-9576-a7afce840af0\") " Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.199255 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "76870c9f-aa42-4f20-9576-a7afce840af0" (UID: "76870c9f-aa42-4f20-9576-a7afce840af0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.199439 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-logs" (OuterVolumeSpecName: "logs") pod "76870c9f-aa42-4f20-9576-a7afce840af0" (UID: "76870c9f-aa42-4f20-9576-a7afce840af0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.202963 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-scripts" (OuterVolumeSpecName: "scripts") pod "76870c9f-aa42-4f20-9576-a7afce840af0" (UID: "76870c9f-aa42-4f20-9576-a7afce840af0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.203051 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "76870c9f-aa42-4f20-9576-a7afce840af0" (UID: "76870c9f-aa42-4f20-9576-a7afce840af0"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.204995 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76870c9f-aa42-4f20-9576-a7afce840af0-kube-api-access-zn6lf" (OuterVolumeSpecName: "kube-api-access-zn6lf") pod "76870c9f-aa42-4f20-9576-a7afce840af0" (UID: "76870c9f-aa42-4f20-9576-a7afce840af0"). InnerVolumeSpecName "kube-api-access-zn6lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.248093 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76870c9f-aa42-4f20-9576-a7afce840af0" (UID: "76870c9f-aa42-4f20-9576-a7afce840af0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.267435 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-config-data" (OuterVolumeSpecName: "config-data") pod "76870c9f-aa42-4f20-9576-a7afce840af0" (UID: "76870c9f-aa42-4f20-9576-a7afce840af0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.301808 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.301855 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.301875 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn6lf\" (UniqueName: \"kubernetes.io/projected/76870c9f-aa42-4f20-9576-a7afce840af0-kube-api-access-zn6lf\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.301886 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76870c9f-aa42-4f20-9576-a7afce840af0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.301898 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.301963 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.301977 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76870c9f-aa42-4f20-9576-a7afce840af0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.324509 4787 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.354256 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-559dbc6bdd-zjdgc"] Jan 26 18:03:25 crc kubenswrapper[4787]: W0126 18:03:25.364537 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddde001c_cec7_43fc_8f4d_0d7153ec3911.slice/crio-c54eb01fb3b68d96a1bd9263020efc0f03d6ed5618c6ac1cdc598e91145e7ff2 WatchSource:0}: Error finding container c54eb01fb3b68d96a1bd9263020efc0f03d6ed5618c6ac1cdc598e91145e7ff2: Status 404 returned error can't find the container with id c54eb01fb3b68d96a1bd9263020efc0f03d6ed5618c6ac1cdc598e91145e7ff2 Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.403551 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.557360 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0385f16-3fcd-47b1-8018-96960e1193bf","Type":"ContainerStarted","Data":"8dfdbb53113f5cb34a10e2774c62b0304b580bd94401ed6e118d056c7f12c098"} Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.565532 4787 generic.go:334] "Generic (PLEG): container finished" podID="678441af-ae6d-4142-91fc-450e171e9a35" containerID="ad807bc1dba590992256b350b9c28d09fe142075b4e540afbb3b4942797ff995" exitCode=0 Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.565604 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-vrnz8" event={"ID":"678441af-ae6d-4142-91fc-450e171e9a35","Type":"ContainerDied","Data":"ad807bc1dba590992256b350b9c28d09fe142075b4e540afbb3b4942797ff995"} Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 
18:03:25.565655 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-vrnz8" event={"ID":"678441af-ae6d-4142-91fc-450e171e9a35","Type":"ContainerStarted","Data":"edd68a91662af87e7b9f68953f5c629c3b2aa0991cd0c81207d977230459e21b"} Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.593576 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.593553676 podStartE2EDuration="3.593553676s" podCreationTimestamp="2026-01-26 18:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:25.578303793 +0000 UTC m=+1174.285439926" watchObservedRunningTime="2026-01-26 18:03:25.593553676 +0000 UTC m=+1174.300689809" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.600341 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.619688 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-p22wr" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.624558 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76870c9f-aa42-4f20-9576-a7afce840af0","Type":"ContainerDied","Data":"6e0bf9795ac10e66f38eabb491f7ef1e94a985286353812779dc87c3ed69009c"} Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.624608 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.624620 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-559dbc6bdd-zjdgc" event={"ID":"ddde001c-cec7-43fc-8f4d-0d7153ec3911","Type":"ContainerStarted","Data":"c54eb01fb3b68d96a1bd9263020efc0f03d6ed5618c6ac1cdc598e91145e7ff2"} Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.628007 4787 scope.go:117] "RemoveContainer" containerID="21a019db0bb17ee4a9909aebc04496e599180df3c9435faa99f094e7c6c9bf2e" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.685817 4787 scope.go:117] "RemoveContainer" containerID="a048355bf8102c3348b3554634ce1055f390edb3950b2502867a3c11ec4cadd4" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.697028 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-p22wr"] Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.708384 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-p22wr"] Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.720808 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.736063 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.748155 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 26 18:03:25 crc kubenswrapper[4787]: E0126 18:03:25.748612 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76870c9f-aa42-4f20-9576-a7afce840af0" containerName="glance-log" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.748636 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="76870c9f-aa42-4f20-9576-a7afce840af0" containerName="glance-log" Jan 26 18:03:25 crc kubenswrapper[4787]: E0126 18:03:25.748668 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1a3dca-b7f8-40d0-9608-e2a45af9348b" containerName="dnsmasq-dns" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.748678 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1a3dca-b7f8-40d0-9608-e2a45af9348b" containerName="dnsmasq-dns" Jan 26 18:03:25 crc kubenswrapper[4787]: E0126 18:03:25.748689 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1a3dca-b7f8-40d0-9608-e2a45af9348b" containerName="init" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.748696 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1a3dca-b7f8-40d0-9608-e2a45af9348b" containerName="init" Jan 26 18:03:25 crc kubenswrapper[4787]: E0126 18:03:25.748708 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76870c9f-aa42-4f20-9576-a7afce840af0" containerName="glance-httpd" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.748716 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="76870c9f-aa42-4f20-9576-a7afce840af0" containerName="glance-httpd" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.748916 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1a3dca-b7f8-40d0-9608-e2a45af9348b" containerName="dnsmasq-dns" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.748960 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="76870c9f-aa42-4f20-9576-a7afce840af0" containerName="glance-log" Jan 26 
18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.748974 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="76870c9f-aa42-4f20-9576-a7afce840af0" containerName="glance-httpd" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.750055 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.752185 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.752508 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.779553 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.821447 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-logs\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.821497 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npggs\" (UniqueName: \"kubernetes.io/projected/a8f5f31f-b089-4fff-a501-700527b53ae7-kube-api-access-npggs\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.821530 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.821550 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.821578 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.821606 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.821663 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.821760 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.924261 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.924340 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-logs\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.924359 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npggs\" (UniqueName: \"kubernetes.io/projected/a8f5f31f-b089-4fff-a501-700527b53ae7-kube-api-access-npggs\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.924379 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.924399 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.924423 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.924447 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.924491 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.925469 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-logs\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.925766 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc 
kubenswrapper[4787]: I0126 18:03:25.926644 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.929088 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.929145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.930102 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.938031 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.946799 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-npggs\" (UniqueName: \"kubernetes.io/projected/a8f5f31f-b089-4fff-a501-700527b53ae7-kube-api-access-npggs\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:25 crc kubenswrapper[4787]: I0126 18:03:25.974769 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.088354 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.473568 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86f4885877-fz869"] Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.475215 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.477872 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.480385 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.483887 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86f4885877-fz869"] Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.538350 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-ovndb-tls-certs\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.538446 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hgsm\" (UniqueName: \"kubernetes.io/projected/08858ab3-fd32-43dd-8002-bb2b01216237-kube-api-access-2hgsm\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.538469 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-config\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.538486 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-internal-tls-certs\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.538551 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-public-tls-certs\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.538567 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-httpd-config\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.538589 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-combined-ca-bundle\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.613134 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:03:26 crc kubenswrapper[4787]: W0126 18:03:26.627524 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8f5f31f_b089_4fff_a501_700527b53ae7.slice/crio-71ecb1933eff2eb34de4275c87bbef96af8952f89a9dc465100a97c9758c3e19 WatchSource:0}: Error finding container 71ecb1933eff2eb34de4275c87bbef96af8952f89a9dc465100a97c9758c3e19: Status 404 
returned error can't find the container with id 71ecb1933eff2eb34de4275c87bbef96af8952f89a9dc465100a97c9758c3e19 Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.639845 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hgsm\" (UniqueName: \"kubernetes.io/projected/08858ab3-fd32-43dd-8002-bb2b01216237-kube-api-access-2hgsm\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.641188 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-config\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.642916 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-internal-tls-certs\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.643263 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-public-tls-certs\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.643286 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-httpd-config\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " 
pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.643336 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-combined-ca-bundle\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.643529 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-ovndb-tls-certs\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.649782 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-internal-tls-certs\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.655562 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-ovndb-tls-certs\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.656899 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-config\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.660007 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-httpd-config\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.663183 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-combined-ca-bundle\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.663629 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-public-tls-certs\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.664006 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hgsm\" (UniqueName: \"kubernetes.io/projected/08858ab3-fd32-43dd-8002-bb2b01216237-kube-api-access-2hgsm\") pod \"neutron-86f4885877-fz869\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:26 crc kubenswrapper[4787]: I0126 18:03:26.799245 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:27 crc kubenswrapper[4787]: I0126 18:03:27.317554 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86f4885877-fz869"] Jan 26 18:03:27 crc kubenswrapper[4787]: I0126 18:03:27.609439 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76870c9f-aa42-4f20-9576-a7afce840af0" path="/var/lib/kubelet/pods/76870c9f-aa42-4f20-9576-a7afce840af0/volumes" Jan 26 18:03:27 crc kubenswrapper[4787]: I0126 18:03:27.617588 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1a3dca-b7f8-40d0-9608-e2a45af9348b" path="/var/lib/kubelet/pods/bd1a3dca-b7f8-40d0-9608-e2a45af9348b/volumes" Jan 26 18:03:27 crc kubenswrapper[4787]: I0126 18:03:27.649690 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a8f5f31f-b089-4fff-a501-700527b53ae7","Type":"ContainerStarted","Data":"0f1cd29082fb000604fcb6b4fffed07a1c3618edde8d45607d7bd88655d15cab"} Jan 26 18:03:27 crc kubenswrapper[4787]: I0126 18:03:27.649737 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a8f5f31f-b089-4fff-a501-700527b53ae7","Type":"ContainerStarted","Data":"71ecb1933eff2eb34de4275c87bbef96af8952f89a9dc465100a97c9758c3e19"} Jan 26 18:03:27 crc kubenswrapper[4787]: I0126 18:03:27.654359 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-vrnz8" event={"ID":"678441af-ae6d-4142-91fc-450e171e9a35","Type":"ContainerStarted","Data":"da352534067ba348e8ef838405f70e84e43e680315572845d971b6836eea7902"} Jan 26 18:03:27 crc kubenswrapper[4787]: I0126 18:03:27.654665 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:27 crc kubenswrapper[4787]: I0126 18:03:27.657927 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-86f4885877-fz869" event={"ID":"08858ab3-fd32-43dd-8002-bb2b01216237","Type":"ContainerStarted","Data":"53c6644643e6ea263073aaceabc26647bf6368f984feae1bf3e180a85706ae15"} Jan 26 18:03:27 crc kubenswrapper[4787]: I0126 18:03:27.661908 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-559dbc6bdd-zjdgc" event={"ID":"ddde001c-cec7-43fc-8f4d-0d7153ec3911","Type":"ContainerStarted","Data":"acabc500671c7c41b523b361e65d9650715f61a36ff9a4f674f0f97eb8c49075"} Jan 26 18:03:27 crc kubenswrapper[4787]: I0126 18:03:27.707820 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-685444497c-vrnz8" podStartSLOduration=4.707797052 podStartE2EDuration="4.707797052s" podCreationTimestamp="2026-01-26 18:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:27.683243452 +0000 UTC m=+1176.390379595" watchObservedRunningTime="2026-01-26 18:03:27.707797052 +0000 UTC m=+1176.414933195" Jan 26 18:03:28 crc kubenswrapper[4787]: I0126 18:03:28.672131 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-559dbc6bdd-zjdgc" event={"ID":"ddde001c-cec7-43fc-8f4d-0d7153ec3911","Type":"ContainerStarted","Data":"9948943997fcb0808d6aebe4e568012904e9afb2462c57558f70d6dc789d4415"} Jan 26 18:03:28 crc kubenswrapper[4787]: I0126 18:03:28.673403 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:28 crc kubenswrapper[4787]: I0126 18:03:28.676278 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a8f5f31f-b089-4fff-a501-700527b53ae7","Type":"ContainerStarted","Data":"ef6ddf453382e486a5a257bd406f89a0335e45350885d60d9c024a0b7b2ded47"} Jan 26 18:03:28 crc kubenswrapper[4787]: I0126 18:03:28.687274 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-sync-gwmwp" event={"ID":"cb6adff5-a812-45ee-a03d-89db150f295f","Type":"ContainerStarted","Data":"754c70da01cbbd2f3c84438f9bcdd85e6b3538dd0dfbc5892db769504466451b"} Jan 26 18:03:28 crc kubenswrapper[4787]: I0126 18:03:28.704535 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-559dbc6bdd-zjdgc" podStartSLOduration=5.704518818 podStartE2EDuration="5.704518818s" podCreationTimestamp="2026-01-26 18:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:28.690508346 +0000 UTC m=+1177.397644479" watchObservedRunningTime="2026-01-26 18:03:28.704518818 +0000 UTC m=+1177.411654951" Jan 26 18:03:28 crc kubenswrapper[4787]: I0126 18:03:28.714484 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-gwmwp" podStartSLOduration=3.070354396 podStartE2EDuration="36.714466711s" podCreationTimestamp="2026-01-26 18:02:52 +0000 UTC" firstStartedPulling="2026-01-26 18:02:53.679350942 +0000 UTC m=+1142.386487075" lastFinishedPulling="2026-01-26 18:03:27.323463257 +0000 UTC m=+1176.030599390" observedRunningTime="2026-01-26 18:03:28.710532005 +0000 UTC m=+1177.417668138" watchObservedRunningTime="2026-01-26 18:03:28.714466711 +0000 UTC m=+1177.421602834" Jan 26 18:03:28 crc kubenswrapper[4787]: I0126 18:03:28.735140 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.735121537 podStartE2EDuration="3.735121537s" podCreationTimestamp="2026-01-26 18:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:28.731240962 +0000 UTC m=+1177.438377095" watchObservedRunningTime="2026-01-26 18:03:28.735121537 +0000 UTC m=+1177.442257670" Jan 26 18:03:32 crc kubenswrapper[4787]: I0126 
18:03:32.741454 4787 generic.go:334] "Generic (PLEG): container finished" podID="cb6adff5-a812-45ee-a03d-89db150f295f" containerID="754c70da01cbbd2f3c84438f9bcdd85e6b3538dd0dfbc5892db769504466451b" exitCode=0 Jan 26 18:03:32 crc kubenswrapper[4787]: I0126 18:03:32.741667 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gwmwp" event={"ID":"cb6adff5-a812-45ee-a03d-89db150f295f","Type":"ContainerDied","Data":"754c70da01cbbd2f3c84438f9bcdd85e6b3538dd0dfbc5892db769504466451b"} Jan 26 18:03:32 crc kubenswrapper[4787]: I0126 18:03:32.746206 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86f4885877-fz869" event={"ID":"08858ab3-fd32-43dd-8002-bb2b01216237","Type":"ContainerStarted","Data":"3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3"} Jan 26 18:03:32 crc kubenswrapper[4787]: I0126 18:03:32.837386 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 18:03:32 crc kubenswrapper[4787]: I0126 18:03:32.837439 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 18:03:32 crc kubenswrapper[4787]: I0126 18:03:32.890563 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 18:03:32 crc kubenswrapper[4787]: I0126 18:03:32.908019 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.755035 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856","Type":"ContainerStarted","Data":"d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944"} Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.756316 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.755339 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="proxy-httpd" containerID="cri-o://d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944" gracePeriod=30 Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.756476 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86f4885877-fz869" event={"ID":"08858ab3-fd32-43dd-8002-bb2b01216237","Type":"ContainerStarted","Data":"8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd"} Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.755371 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="sg-core" containerID="cri-o://5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5" gracePeriod=30 Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.755161 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="ceilometer-central-agent" containerID="cri-o://85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c" gracePeriod=30 Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.755346 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="ceilometer-notification-agent" containerID="cri-o://a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9" gracePeriod=30 Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.756840 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.758676 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pjrz5" event={"ID":"d251ab02-33c7-41d6-806e-3a80f332c86f","Type":"ContainerStarted","Data":"2293ff8d95ba43bc1f85d8da8a1ae8475ca251485cffe8a028386e3c92834862"} Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.760084 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.760113 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.819577 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-pjrz5" podStartSLOduration=2.785493982 podStartE2EDuration="41.819557004s" podCreationTimestamp="2026-01-26 18:02:52 +0000 UTC" firstStartedPulling="2026-01-26 18:02:53.634199847 +0000 UTC m=+1142.341335980" lastFinishedPulling="2026-01-26 18:03:32.668262869 +0000 UTC m=+1181.375399002" observedRunningTime="2026-01-26 18:03:33.810179685 +0000 UTC m=+1182.517315838" watchObservedRunningTime="2026-01-26 18:03:33.819557004 +0000 UTC m=+1182.526693147" Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.824495 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.921770004 podStartE2EDuration="41.824477884s" podCreationTimestamp="2026-01-26 18:02:52 +0000 UTC" firstStartedPulling="2026-01-26 18:02:53.773200775 +0000 UTC m=+1142.480336908" lastFinishedPulling="2026-01-26 18:03:32.675908655 +0000 UTC m=+1181.383044788" observedRunningTime="2026-01-26 18:03:33.77847807 +0000 UTC m=+1182.485614203" watchObservedRunningTime="2026-01-26 18:03:33.824477884 +0000 UTC m=+1182.531614017" Jan 26 18:03:33 crc kubenswrapper[4787]: I0126 18:03:33.836664 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86f4885877-fz869" 
podStartSLOduration=7.836645221 podStartE2EDuration="7.836645221s" podCreationTimestamp="2026-01-26 18:03:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:33.827809975 +0000 UTC m=+1182.534946108" watchObservedRunningTime="2026-01-26 18:03:33.836645221 +0000 UTC m=+1182.543781364" Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.170396 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.293806 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-combined-ca-bundle\") pod \"cb6adff5-a812-45ee-a03d-89db150f295f\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.293944 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-db-sync-config-data\") pod \"cb6adff5-a812-45ee-a03d-89db150f295f\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.294094 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mbcj\" (UniqueName: \"kubernetes.io/projected/cb6adff5-a812-45ee-a03d-89db150f295f-kube-api-access-7mbcj\") pod \"cb6adff5-a812-45ee-a03d-89db150f295f\" (UID: \"cb6adff5-a812-45ee-a03d-89db150f295f\") " Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.308125 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb6adff5-a812-45ee-a03d-89db150f295f-kube-api-access-7mbcj" (OuterVolumeSpecName: "kube-api-access-7mbcj") pod "cb6adff5-a812-45ee-a03d-89db150f295f" (UID: 
"cb6adff5-a812-45ee-a03d-89db150f295f"). InnerVolumeSpecName "kube-api-access-7mbcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.308157 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cb6adff5-a812-45ee-a03d-89db150f295f" (UID: "cb6adff5-a812-45ee-a03d-89db150f295f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.340106 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb6adff5-a812-45ee-a03d-89db150f295f" (UID: "cb6adff5-a812-45ee-a03d-89db150f295f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.362112 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.403538 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.403584 4787 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cb6adff5-a812-45ee-a03d-89db150f295f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.403597 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mbcj\" (UniqueName: 
\"kubernetes.io/projected/cb6adff5-a812-45ee-a03d-89db150f295f-kube-api-access-7mbcj\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.443680 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h"] Jan 26 18:03:34 crc kubenswrapper[4787]: I0126 18:03:34.444196 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" podUID="d9ed4e15-701b-4f52-9b6d-04e44d578960" containerName="dnsmasq-dns" containerID="cri-o://7e077c3313483c663cd7c742a40a4f8cad42f7c629fac4120493b64571603b77" gracePeriod=10 Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:34.774162 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gwmwp" event={"ID":"cb6adff5-a812-45ee-a03d-89db150f295f","Type":"ContainerDied","Data":"b2b89cca3a8a82602c6ca257a7ad920d61e744c787fec1fc0d042ffbf4633f64"} Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:34.774217 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2b89cca3a8a82602c6ca257a7ad920d61e744c787fec1fc0d042ffbf4633f64" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:34.774731 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-gwmwp" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.037896 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs"] Jan 26 18:03:37 crc kubenswrapper[4787]: E0126 18:03:35.038516 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6adff5-a812-45ee-a03d-89db150f295f" containerName="barbican-db-sync" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.038533 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6adff5-a812-45ee-a03d-89db150f295f" containerName="barbican-db-sync" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.038820 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb6adff5-a812-45ee-a03d-89db150f295f" containerName="barbican-db-sync" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.040171 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.043788 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6b64b47c9-scfhg"] Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.045112 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.053684 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.054356 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5rhf4" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.063609 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs"] Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.066049 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.070734 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.075582 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b64b47c9-scfhg"] Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.137629 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q4r5l"] Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.138894 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.150918 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q4r5l"] Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.216669 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v45x6\" (UniqueName: \"kubernetes.io/projected/1e181982-6b54-4a6f-a52c-eb025b767fb0-kube-api-access-v45x6\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.216722 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.216769 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e181982-6b54-4a6f-a52c-eb025b767fb0-logs\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.216822 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c27796ea-3db5-42ad-8b22-e4d774e28578-logs\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.216893 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvpks\" (UniqueName: \"kubernetes.io/projected/6aecb16f-df4d-4138-8e65-b177e30c0530-kube-api-access-vvpks\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.217037 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.217139 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.217172 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-config\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.217235 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5wx2\" (UniqueName: \"kubernetes.io/projected/c27796ea-3db5-42ad-8b22-e4d774e28578-kube-api-access-g5wx2\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 
18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.217264 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-combined-ca-bundle\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.217310 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.217351 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data-custom\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.217395 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-combined-ca-bundle\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.217427 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data\") pod 
\"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.217445 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data-custom\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.217462 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.248685 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c5bf56f6d-9jlnz"] Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.250742 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.257097 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.268349 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c5bf56f6d-9jlnz"] Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318602 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318648 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318670 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-config\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318703 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5wx2\" (UniqueName: \"kubernetes.io/projected/c27796ea-3db5-42ad-8b22-e4d774e28578-kube-api-access-g5wx2\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 
18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318719 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-combined-ca-bundle\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318751 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318775 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data-custom\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318800 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-combined-ca-bundle\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318822 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " 
pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318840 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data-custom\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318856 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318892 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v45x6\" (UniqueName: \"kubernetes.io/projected/1e181982-6b54-4a6f-a52c-eb025b767fb0-kube-api-access-v45x6\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318909 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e181982-6b54-4a6f-a52c-eb025b767fb0-logs\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318924 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " 
pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318963 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c27796ea-3db5-42ad-8b22-e4d774e28578-logs\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.318996 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvpks\" (UniqueName: \"kubernetes.io/projected/6aecb16f-df4d-4138-8e65-b177e30c0530-kube-api-access-vvpks\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.319491 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e181982-6b54-4a6f-a52c-eb025b767fb0-logs\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.319805 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c27796ea-3db5-42ad-8b22-e4d774e28578-logs\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.321286 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 
18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.321451 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.321486 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.321532 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.322503 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-config\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.323109 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-combined-ca-bundle\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.323506 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data-custom\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.324271 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data-custom\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.324386 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-combined-ca-bundle\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.329682 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.341022 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.342483 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45x6\" (UniqueName: \"kubernetes.io/projected/1e181982-6b54-4a6f-a52c-eb025b767fb0-kube-api-access-v45x6\") pod \"barbican-worker-6b64b47c9-scfhg\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.343145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvpks\" (UniqueName: \"kubernetes.io/projected/6aecb16f-df4d-4138-8e65-b177e30c0530-kube-api-access-vvpks\") pod \"dnsmasq-dns-66cdd4b5b5-q4r5l\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.345887 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5wx2\" (UniqueName: \"kubernetes.io/projected/c27796ea-3db5-42ad-8b22-e4d774e28578-kube-api-access-g5wx2\") pod \"barbican-keystone-listener-66c5b5fbf4-wjxfs\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.364393 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.381120 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.421282 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.421455 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b04d42-79df-44db-93d0-c03e0e80b82b-logs\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.421511 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data-custom\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.421568 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbblc\" (UniqueName: \"kubernetes.io/projected/f2b04d42-79df-44db-93d0-c03e0e80b82b-kube-api-access-wbblc\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.421712 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-combined-ca-bundle\") pod 
\"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.456183 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.524047 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b04d42-79df-44db-93d0-c03e0e80b82b-logs\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.524395 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data-custom\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.524472 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbblc\" (UniqueName: \"kubernetes.io/projected/f2b04d42-79df-44db-93d0-c03e0e80b82b-kube-api-access-wbblc\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.524537 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-combined-ca-bundle\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.524695 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b04d42-79df-44db-93d0-c03e0e80b82b-logs\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.524717 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.530102 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data-custom\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.530828 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.531094 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-combined-ca-bundle\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.562357 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbblc\" (UniqueName: 
\"kubernetes.io/projected/f2b04d42-79df-44db-93d0-c03e0e80b82b-kube-api-access-wbblc\") pod \"barbican-api-6c5bf56f6d-9jlnz\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:35.571765 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:36.089579 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:36.089641 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:36.122287 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:36.140289 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:36.812585 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:36.812658 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.730850 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-684775777b-sd52g"] Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.737442 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.756196 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.756392 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.770485 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-684775777b-sd52g"] Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.868113 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-public-tls-certs\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.868162 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jgv\" (UniqueName: \"kubernetes.io/projected/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-kube-api-access-d6jgv\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.868210 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data-custom\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.868246 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-logs\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.868329 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-internal-tls-certs\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.868355 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.868376 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-combined-ca-bundle\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.874808 4787 generic.go:334] "Generic (PLEG): container finished" podID="d9ed4e15-701b-4f52-9b6d-04e44d578960" containerID="7e077c3313483c663cd7c742a40a4f8cad42f7c629fac4120493b64571603b77" exitCode=0 Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.874935 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" 
event={"ID":"d9ed4e15-701b-4f52-9b6d-04e44d578960","Type":"ContainerDied","Data":"7e077c3313483c663cd7c742a40a4f8cad42f7c629fac4120493b64571603b77"} Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.920318 4787 generic.go:334] "Generic (PLEG): container finished" podID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerID="d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944" exitCode=0 Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.920351 4787 generic.go:334] "Generic (PLEG): container finished" podID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerID="5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5" exitCode=2 Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.920363 4787 generic.go:334] "Generic (PLEG): container finished" podID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerID="85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c" exitCode=0 Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.921131 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856","Type":"ContainerDied","Data":"d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944"} Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.921181 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856","Type":"ContainerDied","Data":"5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5"} Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.921194 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856","Type":"ContainerDied","Data":"85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c"} Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.970462 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-internal-tls-certs\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.970516 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.970538 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-combined-ca-bundle\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.970580 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-public-tls-certs\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.970609 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jgv\" (UniqueName: \"kubernetes.io/projected/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-kube-api-access-d6jgv\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.970647 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data-custom\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.970672 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-logs\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.971267 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-logs\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.979039 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data-custom\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.981247 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-combined-ca-bundle\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.982203 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-public-tls-certs\") pod 
\"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:37 crc kubenswrapper[4787]: I0126 18:03:37.982465 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:38 crc kubenswrapper[4787]: I0126 18:03:37.999636 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-internal-tls-certs\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:38 crc kubenswrapper[4787]: I0126 18:03:37.999812 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jgv\" (UniqueName: \"kubernetes.io/projected/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-kube-api-access-d6jgv\") pod \"barbican-api-684775777b-sd52g\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:38 crc kubenswrapper[4787]: I0126 18:03:38.153895 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" podUID="d9ed4e15-701b-4f52-9b6d-04e44d578960" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Jan 26 18:03:38 crc kubenswrapper[4787]: I0126 18:03:38.173167 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:38 crc kubenswrapper[4787]: I0126 18:03:38.469960 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs"] Jan 26 18:03:38 crc kubenswrapper[4787]: I0126 18:03:38.942149 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-684775777b-sd52g"] Jan 26 18:03:38 crc kubenswrapper[4787]: I0126 18:03:38.964428 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" event={"ID":"c27796ea-3db5-42ad-8b22-e4d774e28578","Type":"ContainerStarted","Data":"bec22572ebbabb3f9460ce0b7fa55df8e89db2afa59233b739faf142480eacce"} Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.003036 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q4r5l"] Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.010028 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c5bf56f6d-9jlnz"] Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.020237 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b64b47c9-scfhg"] Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.172902 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.173030 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.312273 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.314505 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcwdx\" (UniqueName: \"kubernetes.io/projected/d9ed4e15-701b-4f52-9b6d-04e44d578960-kube-api-access-rcwdx\") pod \"d9ed4e15-701b-4f52-9b6d-04e44d578960\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.314550 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-sb\") pod \"d9ed4e15-701b-4f52-9b6d-04e44d578960\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.314618 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-nb\") pod \"d9ed4e15-701b-4f52-9b6d-04e44d578960\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.314736 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-config\") pod \"d9ed4e15-701b-4f52-9b6d-04e44d578960\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.314766 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-swift-storage-0\") pod \"d9ed4e15-701b-4f52-9b6d-04e44d578960\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.314924 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-svc\") pod \"d9ed4e15-701b-4f52-9b6d-04e44d578960\" (UID: \"d9ed4e15-701b-4f52-9b6d-04e44d578960\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.332904 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9ed4e15-701b-4f52-9b6d-04e44d578960-kube-api-access-rcwdx" (OuterVolumeSpecName: "kube-api-access-rcwdx") pod "d9ed4e15-701b-4f52-9b6d-04e44d578960" (UID: "d9ed4e15-701b-4f52-9b6d-04e44d578960"). InnerVolumeSpecName "kube-api-access-rcwdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.433456 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcwdx\" (UniqueName: \"kubernetes.io/projected/d9ed4e15-701b-4f52-9b6d-04e44d578960-kube-api-access-rcwdx\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.439280 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9ed4e15-701b-4f52-9b6d-04e44d578960" (UID: "d9ed4e15-701b-4f52-9b6d-04e44d578960"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.492616 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9ed4e15-701b-4f52-9b6d-04e44d578960" (UID: "d9ed4e15-701b-4f52-9b6d-04e44d578960"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.535444 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.535477 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.598823 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-config" (OuterVolumeSpecName: "config") pod "d9ed4e15-701b-4f52-9b6d-04e44d578960" (UID: "d9ed4e15-701b-4f52-9b6d-04e44d578960"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.599267 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d9ed4e15-701b-4f52-9b6d-04e44d578960" (UID: "d9ed4e15-701b-4f52-9b6d-04e44d578960"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.619438 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d9ed4e15-701b-4f52-9b6d-04e44d578960" (UID: "d9ed4e15-701b-4f52-9b6d-04e44d578960"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.639652 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.639671 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.639680 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9ed4e15-701b-4f52-9b6d-04e44d578960-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.776255 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.804863 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.845821 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-combined-ca-bundle\") pod \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.846412 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-run-httpd\") pod \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.846457 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-config-data\") pod \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.846502 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-log-httpd\") pod \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.846564 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2ccn\" (UniqueName: \"kubernetes.io/projected/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-kube-api-access-m2ccn\") pod \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.846680 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-sg-core-conf-yaml\") pod \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.846767 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-scripts\") pod \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\" (UID: \"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856\") " Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.852393 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" (UID: 
"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.854949 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" (UID: "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.885286 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-scripts" (OuterVolumeSpecName: "scripts") pod "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" (UID: "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.886274 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-kube-api-access-m2ccn" (OuterVolumeSpecName: "kube-api-access-m2ccn") pod "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" (UID: "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856"). InnerVolumeSpecName "kube-api-access-m2ccn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.955913 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.955949 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.955959 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2ccn\" (UniqueName: \"kubernetes.io/projected/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-kube-api-access-m2ccn\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.955968 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.981509 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-684775777b-sd52g" event={"ID":"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa","Type":"ContainerStarted","Data":"346eebb01a3f72fc52b37308e2a116af63d29c1e13d92d2e955b17ea8a8bd265"} Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.981555 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-684775777b-sd52g" event={"ID":"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa","Type":"ContainerStarted","Data":"62a8d7fcb7d5c41b0f8668408872d9a8c778a4ed9bdb0a6bb74728f2b02368e4"} Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.983209 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b64b47c9-scfhg" 
event={"ID":"1e181982-6b54-4a6f-a52c-eb025b767fb0","Type":"ContainerStarted","Data":"ae6a4af4234275e5f4e927bcd6b7e953826b39e2517187743b3408b34d7b57d3"} Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.987805 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" event={"ID":"d9ed4e15-701b-4f52-9b6d-04e44d578960","Type":"ContainerDied","Data":"092f4bd32b5ab6901db6d891b1bdab11f0e9787af9459f9eb91380c9c6e24a5d"} Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.987825 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.987842 4787 scope.go:117] "RemoveContainer" containerID="7e077c3313483c663cd7c742a40a4f8cad42f7c629fac4120493b64571603b77" Jan 26 18:03:39 crc kubenswrapper[4787]: I0126 18:03:39.989296 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" (UID: "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.010648 4787 generic.go:334] "Generic (PLEG): container finished" podID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerID="a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9" exitCode=0 Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.010731 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856","Type":"ContainerDied","Data":"a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9"} Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.010758 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ead8a8a4-1c4a-4a7b-95fe-2254b55ab856","Type":"ContainerDied","Data":"48ed0615e52f3293b833c42d8383405079eabd1d02b058cbfb5e19c1d61b1adc"} Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.010841 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.026810 4787 generic.go:334] "Generic (PLEG): container finished" podID="6aecb16f-df4d-4138-8e65-b177e30c0530" containerID="e3675ce352473c3c06d6f87bba5330ec3af5c31191152b16e3a0c9743c7dbb4f" exitCode=0 Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.026880 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" event={"ID":"6aecb16f-df4d-4138-8e65-b177e30c0530","Type":"ContainerDied","Data":"e3675ce352473c3c06d6f87bba5330ec3af5c31191152b16e3a0c9743c7dbb4f"} Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.026908 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" event={"ID":"6aecb16f-df4d-4138-8e65-b177e30c0530","Type":"ContainerStarted","Data":"4cda75bdfee101800772c6c205b9624865bf5dea601370457ec1e99bb350db25"} Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.039746 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" event={"ID":"f2b04d42-79df-44db-93d0-c03e0e80b82b","Type":"ContainerStarted","Data":"e4ca64e1a76343ea3cc4fd2ad477de8be658b4143f37207870a497e65d791095"} Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.039797 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" event={"ID":"f2b04d42-79df-44db-93d0-c03e0e80b82b","Type":"ContainerStarted","Data":"1d7630adbeeca0d31ebb62447f14af3e7a2093e27c385cdef82e1070951270a5"} Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.059282 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.136270 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" (UID: "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.149293 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.149403 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.160390 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.163990 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-config-data" (OuterVolumeSpecName: "config-data") pod "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" (UID: "ead8a8a4-1c4a-4a7b-95fe-2254b55ab856"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.186564 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.265813 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.294415 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h"] Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.301455 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-2bs6h"] Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.359061 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.374340 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.394482 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:03:40 crc kubenswrapper[4787]: E0126 18:03:40.394921 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="proxy-httpd" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.394934 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="proxy-httpd" Jan 26 18:03:40 crc kubenswrapper[4787]: E0126 18:03:40.394954 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ed4e15-701b-4f52-9b6d-04e44d578960" containerName="dnsmasq-dns" Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.394960 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d9ed4e15-701b-4f52-9b6d-04e44d578960" containerName="dnsmasq-dns"
Jan 26 18:03:40 crc kubenswrapper[4787]: E0126 18:03:40.394980 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ed4e15-701b-4f52-9b6d-04e44d578960" containerName="init"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.395029 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9ed4e15-701b-4f52-9b6d-04e44d578960" containerName="init"
Jan 26 18:03:40 crc kubenswrapper[4787]: E0126 18:03:40.395044 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="ceilometer-central-agent"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.395052 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="ceilometer-central-agent"
Jan 26 18:03:40 crc kubenswrapper[4787]: E0126 18:03:40.395064 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="ceilometer-notification-agent"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.395073 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="ceilometer-notification-agent"
Jan 26 18:03:40 crc kubenswrapper[4787]: E0126 18:03:40.395105 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="sg-core"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.395112 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="sg-core"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.395301 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="ceilometer-central-agent"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.395319 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="sg-core"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.395329 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9ed4e15-701b-4f52-9b6d-04e44d578960" containerName="dnsmasq-dns"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.395342 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="ceilometer-notification-agent"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.395350 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" containerName="proxy-httpd"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.396871 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.406618 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.406955 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.407226 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.471954 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-run-httpd\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.472016 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9n2d\" (UniqueName: \"kubernetes.io/projected/b8003b6b-1090-4019-8414-29ab3a393f0c-kube-api-access-x9n2d\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.472036 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.472076 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.472147 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-config-data\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.472181 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-log-httpd\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.472210 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-scripts\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.573357 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-run-httpd\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.573412 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9n2d\" (UniqueName: \"kubernetes.io/projected/b8003b6b-1090-4019-8414-29ab3a393f0c-kube-api-access-x9n2d\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.573430 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.573471 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.573533 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-config-data\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.573575 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-log-httpd\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.573628 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-scripts\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.573996 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-run-httpd\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.574348 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-log-httpd\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.581284 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-scripts\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.582129 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.585096 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-config-data\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.589811 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.590720 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9n2d\" (UniqueName: \"kubernetes.io/projected/b8003b6b-1090-4019-8414-29ab3a393f0c-kube-api-access-x9n2d\") pod \"ceilometer-0\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.731581 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 18:03:40 crc kubenswrapper[4787]: I0126 18:03:40.823577 4787 scope.go:117] "RemoveContainer" containerID="77c6b49201eea341e0899453197149b3324357d891fd059ba0d400ee1b6ebe14"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.058884 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" event={"ID":"f2b04d42-79df-44db-93d0-c03e0e80b82b","Type":"ContainerStarted","Data":"4bb48154bd80422e171eea8ec498dfcb6d8f829679360a23ea6d1d31b6313666"}
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.060070 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c5bf56f6d-9jlnz"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.060094 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c5bf56f6d-9jlnz"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.068724 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-684775777b-sd52g" event={"ID":"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa","Type":"ContainerStarted","Data":"5b93e665a1925c67d4a5127df1172c0fcd71ee7863b68ee1361c8117a65ecb61"}
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.068794 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-684775777b-sd52g"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.068890 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-684775777b-sd52g"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.076113 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" podStartSLOduration=6.076096503 podStartE2EDuration="6.076096503s" podCreationTimestamp="2026-01-26 18:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:41.075630391 +0000 UTC m=+1189.782766524" watchObservedRunningTime="2026-01-26 18:03:41.076096503 +0000 UTC m=+1189.783232636"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.110101 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-684775777b-sd52g" podStartSLOduration=4.110078383 podStartE2EDuration="4.110078383s" podCreationTimestamp="2026-01-26 18:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:41.098396478 +0000 UTC m=+1189.805532611" watchObservedRunningTime="2026-01-26 18:03:41.110078383 +0000 UTC m=+1189.817214516"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.303800 4787 scope.go:117] "RemoveContainer" containerID="d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.496453 4787 scope.go:117] "RemoveContainer" containerID="5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.577273 4787 scope.go:117] "RemoveContainer" containerID="a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.604131 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9ed4e15-701b-4f52-9b6d-04e44d578960" path="/var/lib/kubelet/pods/d9ed4e15-701b-4f52-9b6d-04e44d578960/volumes"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.604876 4787 scope.go:117] "RemoveContainer" containerID="85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.605316 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ead8a8a4-1c4a-4a7b-95fe-2254b55ab856" path="/var/lib/kubelet/pods/ead8a8a4-1c4a-4a7b-95fe-2254b55ab856/volumes"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.637406 4787 scope.go:117] "RemoveContainer" containerID="d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944"
Jan 26 18:03:41 crc kubenswrapper[4787]: E0126 18:03:41.637827 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944\": container with ID starting with d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944 not found: ID does not exist" containerID="d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.637866 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944"} err="failed to get container status \"d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944\": rpc error: code = NotFound desc = could not find container \"d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944\": container with ID starting with d415355132fdbe4a880dd6f47f06531cd3a750654271eb82d23170ce8678b944 not found: ID does not exist"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.637895 4787 scope.go:117] "RemoveContainer" containerID="5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5"
Jan 26 18:03:41 crc kubenswrapper[4787]: E0126 18:03:41.638319 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5\": container with ID starting with 5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5 not found: ID does not exist" containerID="5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.638349 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5"} err="failed to get container status \"5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5\": rpc error: code = NotFound desc = could not find container \"5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5\": container with ID starting with 5679139e245221a818bb4b15c0ee9cb7bb5770ddb828ea421bd72c4a7d3b6bf5 not found: ID does not exist"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.638367 4787 scope.go:117] "RemoveContainer" containerID="a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9"
Jan 26 18:03:41 crc kubenswrapper[4787]: E0126 18:03:41.638602 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9\": container with ID starting with a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9 not found: ID does not exist" containerID="a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.638635 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9"} err="failed to get container status \"a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9\": rpc error: code = NotFound desc = could not find container \"a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9\": container with ID starting with a96e1cbfc23033ce9e10e05adb193b09723411f8512c3e61ee002f36821669d9 not found: ID does not exist"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.638674 4787 scope.go:117] "RemoveContainer" containerID="85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c"
Jan 26 18:03:41 crc kubenswrapper[4787]: E0126 18:03:41.638859 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c\": container with ID starting with 85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c not found: ID does not exist" containerID="85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.638885 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c"} err="failed to get container status \"85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c\": rpc error: code = NotFound desc = could not find container \"85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c\": container with ID starting with 85ec7c70ddbf06aa3d5315fea475ab0e2ab735ed9a0ed068c114f6330e27ac4c not found: ID does not exist"
Jan 26 18:03:41 crc kubenswrapper[4787]: I0126 18:03:41.840194 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 18:03:42 crc kubenswrapper[4787]: I0126 18:03:42.085421 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" event={"ID":"6aecb16f-df4d-4138-8e65-b177e30c0530","Type":"ContainerStarted","Data":"f56ec7f88cdd3f372ce17fd17921db700da0b4195fe04aa2e4a92f5eca4632d1"}
Jan 26 18:03:42 crc kubenswrapper[4787]: I0126 18:03:42.085822 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l"
Jan 26 18:03:42 crc kubenswrapper[4787]: I0126 18:03:42.090019 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b64b47c9-scfhg" event={"ID":"1e181982-6b54-4a6f-a52c-eb025b767fb0","Type":"ContainerStarted","Data":"96b6dc83a1412919436b7c09b8eb24e55f4c4ec160f6f0f178b289df71a6d06a"}
Jan 26 18:03:42 crc kubenswrapper[4787]: I0126 18:03:42.090061 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b64b47c9-scfhg" event={"ID":"1e181982-6b54-4a6f-a52c-eb025b767fb0","Type":"ContainerStarted","Data":"fe98b6784049128c409c7bb089303be8bb4a940ea14a70bee91761c87ae35d2b"}
Jan 26 18:03:42 crc kubenswrapper[4787]: I0126 18:03:42.091995 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8003b6b-1090-4019-8414-29ab3a393f0c","Type":"ContainerStarted","Data":"0e801534a77501c1b50347a648dcc268ed7a563769ac980239114689dc20af68"}
Jan 26 18:03:42 crc kubenswrapper[4787]: I0126 18:03:42.094787 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" event={"ID":"c27796ea-3db5-42ad-8b22-e4d774e28578","Type":"ContainerStarted","Data":"e8e42b27e122fd736da88e5dda90848d4ff33a360274bad85a5080141095b729"}
Jan 26 18:03:42 crc kubenswrapper[4787]: I0126 18:03:42.094821 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" event={"ID":"c27796ea-3db5-42ad-8b22-e4d774e28578","Type":"ContainerStarted","Data":"26320b4b7ea2c7895920e7f17135b8543b24a3e99a032d58fbe702acea73a2ec"}
Jan 26 18:03:42 crc kubenswrapper[4787]: I0126 18:03:42.113560 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" podStartSLOduration=7.113530214 podStartE2EDuration="7.113530214s" podCreationTimestamp="2026-01-26 18:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:42.108673985 +0000 UTC m=+1190.815810138" watchObservedRunningTime="2026-01-26 18:03:42.113530214 +0000 UTC m=+1190.820666347"
Jan 26 18:03:42 crc kubenswrapper[4787]: I0126 18:03:42.139966 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6b64b47c9-scfhg" podStartSLOduration=4.775126338 podStartE2EDuration="7.13994101s" podCreationTimestamp="2026-01-26 18:03:35 +0000 UTC" firstStartedPulling="2026-01-26 18:03:38.9750991 +0000 UTC m=+1187.682235233" lastFinishedPulling="2026-01-26 18:03:41.339913772 +0000 UTC m=+1190.047049905" observedRunningTime="2026-01-26 18:03:42.130145131 +0000 UTC m=+1190.837281264" watchObservedRunningTime="2026-01-26 18:03:42.13994101 +0000 UTC m=+1190.847077143"
Jan 26 18:03:43 crc kubenswrapper[4787]: I0126 18:03:43.116069 4787 generic.go:334] "Generic (PLEG): container finished" podID="d251ab02-33c7-41d6-806e-3a80f332c86f" containerID="2293ff8d95ba43bc1f85d8da8a1ae8475ca251485cffe8a028386e3c92834862" exitCode=0
Jan 26 18:03:43 crc kubenswrapper[4787]: I0126 18:03:43.116400 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pjrz5" event={"ID":"d251ab02-33c7-41d6-806e-3a80f332c86f","Type":"ContainerDied","Data":"2293ff8d95ba43bc1f85d8da8a1ae8475ca251485cffe8a028386e3c92834862"}
Jan 26 18:03:43 crc kubenswrapper[4787]: I0126 18:03:43.126006 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8003b6b-1090-4019-8414-29ab3a393f0c","Type":"ContainerStarted","Data":"6dc0f6877ede41acaf1a36c5fb026a85cc0f56d152af63dd0e82dd0ee31465ec"}
Jan 26 18:03:43 crc kubenswrapper[4787]: I0126 18:03:43.136834 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" podStartSLOduration=5.296093084 podStartE2EDuration="8.13681391s" podCreationTimestamp="2026-01-26 18:03:35 +0000 UTC" firstStartedPulling="2026-01-26 18:03:38.499055972 +0000 UTC m=+1187.206192095" lastFinishedPulling="2026-01-26 18:03:41.339776778 +0000 UTC m=+1190.046912921" observedRunningTime="2026-01-26 18:03:42.160755379 +0000 UTC m=+1190.867891522" watchObservedRunningTime="2026-01-26 18:03:43.13681391 +0000 UTC m=+1191.843950063"
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.135265 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8003b6b-1090-4019-8414-29ab3a393f0c","Type":"ContainerStarted","Data":"36970f4e2fd3be32ffcd77769bbc7ca0032cd78597279b4c25303ed3f53c92b6"}
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.501894 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pjrz5"
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.665294 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-config-data\") pod \"d251ab02-33c7-41d6-806e-3a80f332c86f\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") "
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.665432 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-scripts\") pod \"d251ab02-33c7-41d6-806e-3a80f332c86f\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") "
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.665502 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-combined-ca-bundle\") pod \"d251ab02-33c7-41d6-806e-3a80f332c86f\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") "
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.665586 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-db-sync-config-data\") pod \"d251ab02-33c7-41d6-806e-3a80f332c86f\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") "
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.665682 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d251ab02-33c7-41d6-806e-3a80f332c86f-etc-machine-id\") pod \"d251ab02-33c7-41d6-806e-3a80f332c86f\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") "
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.665761 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq5vd\" (UniqueName: \"kubernetes.io/projected/d251ab02-33c7-41d6-806e-3a80f332c86f-kube-api-access-zq5vd\") pod \"d251ab02-33c7-41d6-806e-3a80f332c86f\" (UID: \"d251ab02-33c7-41d6-806e-3a80f332c86f\") "
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.667069 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d251ab02-33c7-41d6-806e-3a80f332c86f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d251ab02-33c7-41d6-806e-3a80f332c86f" (UID: "d251ab02-33c7-41d6-806e-3a80f332c86f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.671023 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d251ab02-33c7-41d6-806e-3a80f332c86f" (UID: "d251ab02-33c7-41d6-806e-3a80f332c86f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.671105 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d251ab02-33c7-41d6-806e-3a80f332c86f-kube-api-access-zq5vd" (OuterVolumeSpecName: "kube-api-access-zq5vd") pod "d251ab02-33c7-41d6-806e-3a80f332c86f" (UID: "d251ab02-33c7-41d6-806e-3a80f332c86f"). InnerVolumeSpecName "kube-api-access-zq5vd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.672100 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-scripts" (OuterVolumeSpecName: "scripts") pod "d251ab02-33c7-41d6-806e-3a80f332c86f" (UID: "d251ab02-33c7-41d6-806e-3a80f332c86f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.696829 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d251ab02-33c7-41d6-806e-3a80f332c86f" (UID: "d251ab02-33c7-41d6-806e-3a80f332c86f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.715675 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-config-data" (OuterVolumeSpecName: "config-data") pod "d251ab02-33c7-41d6-806e-3a80f332c86f" (UID: "d251ab02-33c7-41d6-806e-3a80f332c86f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.768289 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.768326 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.768339 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.768352 4787 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d251ab02-33c7-41d6-806e-3a80f332c86f-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.768365 4787 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d251ab02-33c7-41d6-806e-3a80f332c86f-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 26 18:03:44 crc kubenswrapper[4787]: I0126 18:03:44.768376 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq5vd\" (UniqueName: \"kubernetes.io/projected/d251ab02-33c7-41d6-806e-3a80f332c86f-kube-api-access-zq5vd\") on node \"crc\" DevicePath \"\""
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.150457 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pjrz5" event={"ID":"d251ab02-33c7-41d6-806e-3a80f332c86f","Type":"ContainerDied","Data":"7e3e44d91bf16ada944a598561d67a61a6d11e12a474f9251f001bfeb03a96ec"}
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.150761 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e3e44d91bf16ada944a598561d67a61a6d11e12a474f9251f001bfeb03a96ec"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.150836 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pjrz5"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.456644 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 18:03:45 crc kubenswrapper[4787]: E0126 18:03:45.457108 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d251ab02-33c7-41d6-806e-3a80f332c86f" containerName="cinder-db-sync"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.457131 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d251ab02-33c7-41d6-806e-3a80f332c86f" containerName="cinder-db-sync"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.457376 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d251ab02-33c7-41d6-806e-3a80f332c86f" containerName="cinder-db-sync"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.458797 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.465673 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.467999 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qtc2j"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.468193 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.468356 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.488344 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.581461 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q4r5l"]
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.581882 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" podUID="6aecb16f-df4d-4138-8e65-b177e30c0530" containerName="dnsmasq-dns" containerID="cri-o://f56ec7f88cdd3f372ce17fd17921db700da0b4195fe04aa2e4a92f5eca4632d1" gracePeriod=10
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.592684 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x97pr\" (UniqueName: \"kubernetes.io/projected/2112d17d-e9c4-4384-86a9-a600503abd45-kube-api-access-x97pr\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.592748 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.592800 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2112d17d-e9c4-4384-86a9-a600503abd45-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.592850 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.592873 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-scripts\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.592889 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.651167 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-lzx9t"]
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.652537 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.663322 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-lzx9t"]
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.694883 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2112d17d-e9c4-4384-86a9-a600503abd45-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.695183 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2112d17d-e9c4-4384-86a9-a600503abd45-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.695277 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.695304 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-scripts\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.696769 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.696854 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x97pr\" (UniqueName: \"kubernetes.io/projected/2112d17d-e9c4-4384-86a9-a600503abd45-kube-api-access-x97pr\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.696920 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.709804 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.713819 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.717768 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-scripts\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0"
Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.717849 4787 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.727933 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x97pr\" (UniqueName: \"kubernetes.io/projected/2112d17d-e9c4-4384-86a9-a600503abd45-kube-api-access-x97pr\") pod \"cinder-scheduler-0\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " pod="openstack/cinder-scheduler-0" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.785380 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.803289 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.803341 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-config\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.804012 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm46h\" (UniqueName: \"kubernetes.io/projected/d3368f42-602d-4df9-a27d-403bd3ffae37-kube-api-access-jm46h\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.804164 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.804292 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.804322 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.804388 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.807820 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.824526 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.825129 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929068 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c058658e-674d-484e-9644-7ae6086c65cb-logs\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929133 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929165 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929196 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929232 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" 
(UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929268 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w2p5\" (UniqueName: \"kubernetes.io/projected/c058658e-674d-484e-9644-7ae6086c65cb-kube-api-access-5w2p5\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929292 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c058658e-674d-484e-9644-7ae6086c65cb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929359 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-config\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929383 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm46h\" (UniqueName: \"kubernetes.io/projected/d3368f42-602d-4df9-a27d-403bd3ffae37-kube-api-access-jm46h\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929406 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " 
pod="openstack/cinder-api-0" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929426 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-scripts\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929483 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.929512 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data-custom\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.930197 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.930257 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.931618 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.931797 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.932268 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-config\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:45 crc kubenswrapper[4787]: I0126 18:03:45.954073 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm46h\" (UniqueName: \"kubernetes.io/projected/d3368f42-602d-4df9-a27d-403bd3ffae37-kube-api-access-jm46h\") pod \"dnsmasq-dns-75dbb546bf-lzx9t\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.031394 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.032217 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-scripts\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.032279 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data-custom\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.032357 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c058658e-674d-484e-9644-7ae6086c65cb-logs\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.032382 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.032422 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c058658e-674d-484e-9644-7ae6086c65cb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.032437 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w2p5\" (UniqueName: \"kubernetes.io/projected/c058658e-674d-484e-9644-7ae6086c65cb-kube-api-access-5w2p5\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 
18:03:46.033198 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c058658e-674d-484e-9644-7ae6086c65cb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.035743 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c058658e-674d-484e-9644-7ae6086c65cb-logs\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.037059 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data-custom\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.037420 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.038435 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.044218 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-scripts\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " 
pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.056793 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w2p5\" (UniqueName: \"kubernetes.io/projected/c058658e-674d-484e-9644-7ae6086c65cb-kube-api-access-5w2p5\") pod \"cinder-api-0\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.135824 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.138772 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.287497 4787 generic.go:334] "Generic (PLEG): container finished" podID="6aecb16f-df4d-4138-8e65-b177e30c0530" containerID="f56ec7f88cdd3f372ce17fd17921db700da0b4195fe04aa2e4a92f5eca4632d1" exitCode=0 Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.287580 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" event={"ID":"6aecb16f-df4d-4138-8e65-b177e30c0530","Type":"ContainerDied","Data":"f56ec7f88cdd3f372ce17fd17921db700da0b4195fe04aa2e4a92f5eca4632d1"} Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.303506 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8003b6b-1090-4019-8414-29ab3a393f0c","Type":"ContainerStarted","Data":"97457590143af98c242b616f62ef4d708443ff2708ea9818f49cfac492a58a72"} Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.369010 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.473450 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-swift-storage-0\") pod \"6aecb16f-df4d-4138-8e65-b177e30c0530\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.473744 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-nb\") pod \"6aecb16f-df4d-4138-8e65-b177e30c0530\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.473781 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-config\") pod \"6aecb16f-df4d-4138-8e65-b177e30c0530\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.473798 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-svc\") pod \"6aecb16f-df4d-4138-8e65-b177e30c0530\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.473832 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-sb\") pod \"6aecb16f-df4d-4138-8e65-b177e30c0530\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.473879 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvpks\" 
(UniqueName: \"kubernetes.io/projected/6aecb16f-df4d-4138-8e65-b177e30c0530-kube-api-access-vvpks\") pod \"6aecb16f-df4d-4138-8e65-b177e30c0530\" (UID: \"6aecb16f-df4d-4138-8e65-b177e30c0530\") " Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.487873 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aecb16f-df4d-4138-8e65-b177e30c0530-kube-api-access-vvpks" (OuterVolumeSpecName: "kube-api-access-vvpks") pod "6aecb16f-df4d-4138-8e65-b177e30c0530" (UID: "6aecb16f-df4d-4138-8e65-b177e30c0530"). InnerVolumeSpecName "kube-api-access-vvpks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.574611 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6aecb16f-df4d-4138-8e65-b177e30c0530" (UID: "6aecb16f-df4d-4138-8e65-b177e30c0530"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.575666 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.575685 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvpks\" (UniqueName: \"kubernetes.io/projected/6aecb16f-df4d-4138-8e65-b177e30c0530-kube-api-access-vvpks\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.593463 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6aecb16f-df4d-4138-8e65-b177e30c0530" (UID: "6aecb16f-df4d-4138-8e65-b177e30c0530"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.630521 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-config" (OuterVolumeSpecName: "config") pod "6aecb16f-df4d-4138-8e65-b177e30c0530" (UID: "6aecb16f-df4d-4138-8e65-b177e30c0530"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.632622 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6aecb16f-df4d-4138-8e65-b177e30c0530" (UID: "6aecb16f-df4d-4138-8e65-b177e30c0530"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.633139 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6aecb16f-df4d-4138-8e65-b177e30c0530" (UID: "6aecb16f-df4d-4138-8e65-b177e30c0530"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.642717 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.687204 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.687231 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.687245 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.687257 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aecb16f-df4d-4138-8e65-b177e30c0530-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.808462 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.808760 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.878940 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:03:46 crc kubenswrapper[4787]: W0126 18:03:46.992836 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3368f42_602d_4df9_a27d_403bd3ffae37.slice/crio-0110c0e67aa79afedc80e411a4abc87293086bdd11a1fdc59ddfcabe93a9ea74 WatchSource:0}: Error finding container 0110c0e67aa79afedc80e411a4abc87293086bdd11a1fdc59ddfcabe93a9ea74: Status 404 returned error can't find the container with id 0110c0e67aa79afedc80e411a4abc87293086bdd11a1fdc59ddfcabe93a9ea74 Jan 26 18:03:46 crc kubenswrapper[4787]: I0126 18:03:46.994217 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-lzx9t"] Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.314725 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c058658e-674d-484e-9644-7ae6086c65cb","Type":"ContainerStarted","Data":"d49d4af52873f9660e619fd93804404271720c41baf06445113f97dd59617e82"} Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.318054 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2112d17d-e9c4-4384-86a9-a600503abd45","Type":"ContainerStarted","Data":"bb0f9490c0568d666fe06b1bb75fc402d2d2b970317d41ef45f5610c3f358ce6"} Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.321857 4787 generic.go:334] 
"Generic (PLEG): container finished" podID="d3368f42-602d-4df9-a27d-403bd3ffae37" containerID="379f17dd89ea40d49a1bc6ceb9ea880e7c897b014636927cd337f9fec894bd0b" exitCode=0 Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.322110 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" event={"ID":"d3368f42-602d-4df9-a27d-403bd3ffae37","Type":"ContainerDied","Data":"379f17dd89ea40d49a1bc6ceb9ea880e7c897b014636927cd337f9fec894bd0b"} Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.322163 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" event={"ID":"d3368f42-602d-4df9-a27d-403bd3ffae37","Type":"ContainerStarted","Data":"0110c0e67aa79afedc80e411a4abc87293086bdd11a1fdc59ddfcabe93a9ea74"} Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.339906 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8003b6b-1090-4019-8414-29ab3a393f0c","Type":"ContainerStarted","Data":"20ce24b00e4a2879b62e27bdfd63933ada0a6adfa0e1272af42f81cf5ebd50b0"} Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.340698 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.368846 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" event={"ID":"6aecb16f-df4d-4138-8e65-b177e30c0530","Type":"ContainerDied","Data":"4cda75bdfee101800772c6c205b9624865bf5dea601370457ec1e99bb350db25"} Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.368895 4787 scope.go:117] "RemoveContainer" containerID="f56ec7f88cdd3f372ce17fd17921db700da0b4195fe04aa2e4a92f5eca4632d1" Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.369061 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-q4r5l" Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.386197 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.546262213 podStartE2EDuration="7.386176943s" podCreationTimestamp="2026-01-26 18:03:40 +0000 UTC" firstStartedPulling="2026-01-26 18:03:41.849200252 +0000 UTC m=+1190.556336395" lastFinishedPulling="2026-01-26 18:03:46.689114992 +0000 UTC m=+1195.396251125" observedRunningTime="2026-01-26 18:03:47.369361292 +0000 UTC m=+1196.076497415" watchObservedRunningTime="2026-01-26 18:03:47.386176943 +0000 UTC m=+1196.093313066" Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.403426 4787 scope.go:117] "RemoveContainer" containerID="e3675ce352473c3c06d6f87bba5330ec3af5c31191152b16e3a0c9743c7dbb4f" Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.417036 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q4r5l"] Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.427977 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q4r5l"] Jan 26 18:03:47 crc kubenswrapper[4787]: I0126 18:03:47.614648 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aecb16f-df4d-4138-8e65-b177e30c0530" path="/var/lib/kubelet/pods/6aecb16f-df4d-4138-8e65-b177e30c0530/volumes" Jan 26 18:03:48 crc kubenswrapper[4787]: I0126 18:03:48.024418 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:48 crc kubenswrapper[4787]: I0126 18:03:48.070002 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:48 crc kubenswrapper[4787]: I0126 18:03:48.415130 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c058658e-674d-484e-9644-7ae6086c65cb","Type":"ContainerStarted","Data":"e047efe779197a87b585080d2ea585a483800deae7eba9595fd7f60f69eac941"} Jan 26 18:03:48 crc kubenswrapper[4787]: I0126 18:03:48.474018 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" event={"ID":"d3368f42-602d-4df9-a27d-403bd3ffae37","Type":"ContainerStarted","Data":"d9e9a8637c4103ef3906e62de57bd46a3f104eeda7a4bda74fefad8d9e84e2a2"} Jan 26 18:03:48 crc kubenswrapper[4787]: I0126 18:03:48.474113 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:48 crc kubenswrapper[4787]: I0126 18:03:48.503711 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" podStartSLOduration=3.503693522 podStartE2EDuration="3.503693522s" podCreationTimestamp="2026-01-26 18:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:48.50239345 +0000 UTC m=+1197.209529583" watchObservedRunningTime="2026-01-26 18:03:48.503693522 +0000 UTC m=+1197.210829655" Jan 26 18:03:49 crc kubenswrapper[4787]: I0126 18:03:49.227718 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:03:49 crc kubenswrapper[4787]: I0126 18:03:49.508166 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c058658e-674d-484e-9644-7ae6086c65cb","Type":"ContainerStarted","Data":"db03f06efd061ca3d64f50a5a46219dcd0c4b527d1c6fcc2c34c3c03c3905860"} Jan 26 18:03:49 crc kubenswrapper[4787]: I0126 18:03:49.509071 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 18:03:49 crc kubenswrapper[4787]: I0126 18:03:49.526288 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2112d17d-e9c4-4384-86a9-a600503abd45","Type":"ContainerStarted","Data":"c0fd9a57dbb14e6417f9d1146c62ebea0b133fb6cde099d6377e92b29d9120d8"} Jan 26 18:03:49 crc kubenswrapper[4787]: I0126 18:03:49.537911 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.537887614 podStartE2EDuration="4.537887614s" podCreationTimestamp="2026-01-26 18:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:49.534116373 +0000 UTC m=+1198.241252506" watchObservedRunningTime="2026-01-26 18:03:49.537887614 +0000 UTC m=+1198.245023747" Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.134868 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.535311 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c058658e-674d-484e-9644-7ae6086c65cb" containerName="cinder-api-log" containerID="cri-o://e047efe779197a87b585080d2ea585a483800deae7eba9595fd7f60f69eac941" gracePeriod=30 Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.536470 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2112d17d-e9c4-4384-86a9-a600503abd45","Type":"ContainerStarted","Data":"201c58bcacb82f327df09ae94b5879e5b312566dd56ef1c2ff5b3b39df63342c"} Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.536767 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c058658e-674d-484e-9644-7ae6086c65cb" containerName="cinder-api" containerID="cri-o://db03f06efd061ca3d64f50a5a46219dcd0c4b527d1c6fcc2c34c3c03c3905860" gracePeriod=30 Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.563360 4787 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.523423071 podStartE2EDuration="5.563341914s" podCreationTimestamp="2026-01-26 18:03:45 +0000 UTC" firstStartedPulling="2026-01-26 18:03:46.686413146 +0000 UTC m=+1195.393549279" lastFinishedPulling="2026-01-26 18:03:47.726331989 +0000 UTC m=+1196.433468122" observedRunningTime="2026-01-26 18:03:50.557298806 +0000 UTC m=+1199.264434939" watchObservedRunningTime="2026-01-26 18:03:50.563341914 +0000 UTC m=+1199.270478047" Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.819609 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.825581 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.899291 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c5bf56f6d-9jlnz"] Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.899560 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api-log" containerID="cri-o://e4ca64e1a76343ea3cc4fd2ad477de8be658b4143f37207870a497e65d791095" gracePeriod=30 Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.900057 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api" containerID="cri-o://4bb48154bd80422e171eea8ec498dfcb6d8f829679360a23ea6d1d31b6313666" gracePeriod=30 Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.920010 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api" 
probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": EOF" Jan 26 18:03:50 crc kubenswrapper[4787]: I0126 18:03:50.920704 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": EOF" Jan 26 18:03:51 crc kubenswrapper[4787]: I0126 18:03:51.564375 4787 generic.go:334] "Generic (PLEG): container finished" podID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerID="e4ca64e1a76343ea3cc4fd2ad477de8be658b4143f37207870a497e65d791095" exitCode=143 Jan 26 18:03:51 crc kubenswrapper[4787]: I0126 18:03:51.564717 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" event={"ID":"f2b04d42-79df-44db-93d0-c03e0e80b82b","Type":"ContainerDied","Data":"e4ca64e1a76343ea3cc4fd2ad477de8be658b4143f37207870a497e65d791095"} Jan 26 18:03:51 crc kubenswrapper[4787]: I0126 18:03:51.570274 4787 generic.go:334] "Generic (PLEG): container finished" podID="c058658e-674d-484e-9644-7ae6086c65cb" containerID="db03f06efd061ca3d64f50a5a46219dcd0c4b527d1c6fcc2c34c3c03c3905860" exitCode=0 Jan 26 18:03:51 crc kubenswrapper[4787]: I0126 18:03:51.570308 4787 generic.go:334] "Generic (PLEG): container finished" podID="c058658e-674d-484e-9644-7ae6086c65cb" containerID="e047efe779197a87b585080d2ea585a483800deae7eba9595fd7f60f69eac941" exitCode=143 Jan 26 18:03:51 crc kubenswrapper[4787]: I0126 18:03:51.571049 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c058658e-674d-484e-9644-7ae6086c65cb","Type":"ContainerDied","Data":"db03f06efd061ca3d64f50a5a46219dcd0c4b527d1c6fcc2c34c3c03c3905860"} Jan 26 18:03:51 crc kubenswrapper[4787]: I0126 18:03:51.571078 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c058658e-674d-484e-9644-7ae6086c65cb","Type":"ContainerDied","Data":"e047efe779197a87b585080d2ea585a483800deae7eba9595fd7f60f69eac941"} Jan 26 18:03:51 crc kubenswrapper[4787]: E0126 18:03:51.658564 4787 info.go:109] Failed to get network devices: open /sys/class/net/d49d4af52873f96/address: no such file or directory Jan 26 18:03:51 crc kubenswrapper[4787]: I0126 18:03:51.944407 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.049885 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c058658e-674d-484e-9644-7ae6086c65cb-logs\") pod \"c058658e-674d-484e-9644-7ae6086c65cb\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.050659 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c058658e-674d-484e-9644-7ae6086c65cb-etc-machine-id\") pod \"c058658e-674d-484e-9644-7ae6086c65cb\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.050726 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data\") pod \"c058658e-674d-484e-9644-7ae6086c65cb\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.050800 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-combined-ca-bundle\") pod \"c058658e-674d-484e-9644-7ae6086c65cb\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.050845 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5w2p5\" (UniqueName: \"kubernetes.io/projected/c058658e-674d-484e-9644-7ae6086c65cb-kube-api-access-5w2p5\") pod \"c058658e-674d-484e-9644-7ae6086c65cb\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.050880 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data-custom\") pod \"c058658e-674d-484e-9644-7ae6086c65cb\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.051011 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-scripts\") pod \"c058658e-674d-484e-9644-7ae6086c65cb\" (UID: \"c058658e-674d-484e-9644-7ae6086c65cb\") " Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.051663 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c058658e-674d-484e-9644-7ae6086c65cb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c058658e-674d-484e-9644-7ae6086c65cb" (UID: "c058658e-674d-484e-9644-7ae6086c65cb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.052624 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c058658e-674d-484e-9644-7ae6086c65cb-logs" (OuterVolumeSpecName: "logs") pod "c058658e-674d-484e-9644-7ae6086c65cb" (UID: "c058658e-674d-484e-9644-7ae6086c65cb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.062211 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c058658e-674d-484e-9644-7ae6086c65cb-kube-api-access-5w2p5" (OuterVolumeSpecName: "kube-api-access-5w2p5") pod "c058658e-674d-484e-9644-7ae6086c65cb" (UID: "c058658e-674d-484e-9644-7ae6086c65cb"). InnerVolumeSpecName "kube-api-access-5w2p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.065810 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c058658e-674d-484e-9644-7ae6086c65cb" (UID: "c058658e-674d-484e-9644-7ae6086c65cb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.065933 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-scripts" (OuterVolumeSpecName: "scripts") pod "c058658e-674d-484e-9644-7ae6086c65cb" (UID: "c058658e-674d-484e-9644-7ae6086c65cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.096144 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c058658e-674d-484e-9644-7ae6086c65cb" (UID: "c058658e-674d-484e-9644-7ae6086c65cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.154773 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.155074 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c058658e-674d-484e-9644-7ae6086c65cb-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.155158 4787 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c058658e-674d-484e-9644-7ae6086c65cb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.155221 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.155282 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w2p5\" (UniqueName: \"kubernetes.io/projected/c058658e-674d-484e-9644-7ae6086c65cb-kube-api-access-5w2p5\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.155356 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.179234 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data" (OuterVolumeSpecName: "config-data") pod "c058658e-674d-484e-9644-7ae6086c65cb" (UID: "c058658e-674d-484e-9644-7ae6086c65cb"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.256930 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c058658e-674d-484e-9644-7ae6086c65cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.603474 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c058658e-674d-484e-9644-7ae6086c65cb","Type":"ContainerDied","Data":"d49d4af52873f9660e619fd93804404271720c41baf06445113f97dd59617e82"} Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.603504 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.603637 4787 scope.go:117] "RemoveContainer" containerID="db03f06efd061ca3d64f50a5a46219dcd0c4b527d1c6fcc2c34c3c03c3905860" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.639227 4787 scope.go:117] "RemoveContainer" containerID="e047efe779197a87b585080d2ea585a483800deae7eba9595fd7f60f69eac941" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.652399 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.668186 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.680884 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:03:52 crc kubenswrapper[4787]: E0126 18:03:52.681758 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aecb16f-df4d-4138-8e65-b177e30c0530" containerName="init" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.681789 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aecb16f-df4d-4138-8e65-b177e30c0530" containerName="init" Jan 26 
18:03:52 crc kubenswrapper[4787]: E0126 18:03:52.681807 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aecb16f-df4d-4138-8e65-b177e30c0530" containerName="dnsmasq-dns" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.681815 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aecb16f-df4d-4138-8e65-b177e30c0530" containerName="dnsmasq-dns" Jan 26 18:03:52 crc kubenswrapper[4787]: E0126 18:03:52.681842 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c058658e-674d-484e-9644-7ae6086c65cb" containerName="cinder-api" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.681850 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c058658e-674d-484e-9644-7ae6086c65cb" containerName="cinder-api" Jan 26 18:03:52 crc kubenswrapper[4787]: E0126 18:03:52.681867 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c058658e-674d-484e-9644-7ae6086c65cb" containerName="cinder-api-log" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.681877 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c058658e-674d-484e-9644-7ae6086c65cb" containerName="cinder-api-log" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.682158 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c058658e-674d-484e-9644-7ae6086c65cb" containerName="cinder-api" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.682184 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c058658e-674d-484e-9644-7ae6086c65cb" containerName="cinder-api-log" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.682201 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aecb16f-df4d-4138-8e65-b177e30c0530" containerName="dnsmasq-dns" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.683378 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.686868 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.687737 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.687914 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.714897 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.769040 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.769083 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28cf7\" (UniqueName: \"kubernetes.io/projected/44392b57-bc6b-4a8b-8ff3-346fab2422af-kube-api-access-28cf7\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.769104 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.769184 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data-custom\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.769260 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.769280 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44392b57-bc6b-4a8b-8ff3-346fab2422af-etc-machine-id\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.769302 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-public-tls-certs\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.769339 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44392b57-bc6b-4a8b-8ff3-346fab2422af-logs\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.769401 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-scripts\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.870801 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.871004 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28cf7\" (UniqueName: \"kubernetes.io/projected/44392b57-bc6b-4a8b-8ff3-346fab2422af-kube-api-access-28cf7\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.871047 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.871141 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data-custom\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.871306 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc 
kubenswrapper[4787]: I0126 18:03:52.871345 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44392b57-bc6b-4a8b-8ff3-346fab2422af-etc-machine-id\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.871369 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-public-tls-certs\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.871453 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44392b57-bc6b-4a8b-8ff3-346fab2422af-logs\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.871540 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-scripts\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.873337 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44392b57-bc6b-4a8b-8ff3-346fab2422af-logs\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.873413 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44392b57-bc6b-4a8b-8ff3-346fab2422af-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.876986 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-public-tls-certs\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.877551 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-scripts\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.877860 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.878353 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data-custom\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.878916 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.882077 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:52 crc kubenswrapper[4787]: I0126 18:03:52.898304 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28cf7\" (UniqueName: \"kubernetes.io/projected/44392b57-bc6b-4a8b-8ff3-346fab2422af-kube-api-access-28cf7\") pod \"cinder-api-0\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " pod="openstack/cinder-api-0" Jan 26 18:03:53 crc kubenswrapper[4787]: I0126 18:03:53.003883 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:03:53 crc kubenswrapper[4787]: I0126 18:03:53.506772 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:03:53 crc kubenswrapper[4787]: I0126 18:03:53.620809 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c058658e-674d-484e-9644-7ae6086c65cb" path="/var/lib/kubelet/pods/c058658e-674d-484e-9644-7ae6086c65cb/volumes" Jan 26 18:03:53 crc kubenswrapper[4787]: I0126 18:03:53.630655 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44392b57-bc6b-4a8b-8ff3-346fab2422af","Type":"ContainerStarted","Data":"b0287cc48fe8d9b4345591edc1a2e6fd0c5bb984ec158af735fff33eb8d4962d"} Jan 26 18:03:54 crc kubenswrapper[4787]: I0126 18:03:54.150018 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:54 crc kubenswrapper[4787]: I0126 18:03:54.213786 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:03:54 crc kubenswrapper[4787]: I0126 18:03:54.392527 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:03:54 crc kubenswrapper[4787]: I0126 
18:03:54.643965 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44392b57-bc6b-4a8b-8ff3-346fab2422af","Type":"ContainerStarted","Data":"a522c77b7f2e449b1626089cb405e0a35925fb71dd7b4046bb28319da9af07ef"} Jan 26 18:03:54 crc kubenswrapper[4787]: I0126 18:03:54.998413 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:03:55 crc kubenswrapper[4787]: I0126 18:03:55.654144 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44392b57-bc6b-4a8b-8ff3-346fab2422af","Type":"ContainerStarted","Data":"fb5b68e300042cdeab819b0943f78da3bc13f53fb414daf36d4826013aa7717e"} Jan 26 18:03:55 crc kubenswrapper[4787]: I0126 18:03:55.654307 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.060410 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.079793 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.079771932 podStartE2EDuration="4.079771932s" podCreationTimestamp="2026-01-26 18:03:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:03:55.685696977 +0000 UTC m=+1204.392833110" watchObservedRunningTime="2026-01-26 18:03:56.079771932 +0000 UTC m=+1204.786908085" Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.101053 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.140104 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 
18:03:56.224306 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-vrnz8"] Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.224598 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-685444497c-vrnz8" podUID="678441af-ae6d-4142-91fc-450e171e9a35" containerName="dnsmasq-dns" containerID="cri-o://da352534067ba348e8ef838405f70e84e43e680315572845d971b6836eea7902" gracePeriod=10 Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.348135 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:45006->10.217.0.158:9311: read: connection reset by peer" Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.348524 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:45000->10.217.0.158:9311: read: connection reset by peer" Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.666043 4787 generic.go:334] "Generic (PLEG): container finished" podID="678441af-ae6d-4142-91fc-450e171e9a35" containerID="da352534067ba348e8ef838405f70e84e43e680315572845d971b6836eea7902" exitCode=0 Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.666080 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-vrnz8" event={"ID":"678441af-ae6d-4142-91fc-450e171e9a35","Type":"ContainerDied","Data":"da352534067ba348e8ef838405f70e84e43e680315572845d971b6836eea7902"} Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.668373 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerID="4bb48154bd80422e171eea8ec498dfcb6d8f829679360a23ea6d1d31b6313666" exitCode=0 Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.668443 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" event={"ID":"f2b04d42-79df-44db-93d0-c03e0e80b82b","Type":"ContainerDied","Data":"4bb48154bd80422e171eea8ec498dfcb6d8f829679360a23ea6d1d31b6313666"} Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.668625 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2112d17d-e9c4-4384-86a9-a600503abd45" containerName="cinder-scheduler" containerID="cri-o://c0fd9a57dbb14e6417f9d1146c62ebea0b133fb6cde099d6377e92b29d9120d8" gracePeriod=30 Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.668702 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2112d17d-e9c4-4384-86a9-a600503abd45" containerName="probe" containerID="cri-o://201c58bcacb82f327df09ae94b5879e5b312566dd56ef1c2ff5b3b39df63342c" gracePeriod=30 Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.819132 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86f4885877-fz869" Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.920484 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-559dbc6bdd-zjdgc"] Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.922260 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-559dbc6bdd-zjdgc" podUID="ddde001c-cec7-43fc-8f4d-0d7153ec3911" containerName="neutron-api" containerID="cri-o://acabc500671c7c41b523b361e65d9650715f61a36ff9a4f674f0f97eb8c49075" gracePeriod=30 Jan 26 18:03:56 crc kubenswrapper[4787]: I0126 18:03:56.924517 4787 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-559dbc6bdd-zjdgc" podUID="ddde001c-cec7-43fc-8f4d-0d7153ec3911" containerName="neutron-httpd" containerID="cri-o://9948943997fcb0808d6aebe4e568012904e9afb2462c57558f70d6dc789d4415" gracePeriod=30 Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.026329 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.034844 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.051377 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-nb\") pod \"678441af-ae6d-4142-91fc-450e171e9a35\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.051442 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-combined-ca-bundle\") pod \"f2b04d42-79df-44db-93d0-c03e0e80b82b\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.051469 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data-custom\") pod \"f2b04d42-79df-44db-93d0-c03e0e80b82b\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.051510 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-config\") pod \"678441af-ae6d-4142-91fc-450e171e9a35\" (UID: 
\"678441af-ae6d-4142-91fc-450e171e9a35\") " Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.051574 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-sb\") pod \"678441af-ae6d-4142-91fc-450e171e9a35\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.051596 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj7tz\" (UniqueName: \"kubernetes.io/projected/678441af-ae6d-4142-91fc-450e171e9a35-kube-api-access-cj7tz\") pod \"678441af-ae6d-4142-91fc-450e171e9a35\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.051680 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbblc\" (UniqueName: \"kubernetes.io/projected/f2b04d42-79df-44db-93d0-c03e0e80b82b-kube-api-access-wbblc\") pod \"f2b04d42-79df-44db-93d0-c03e0e80b82b\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.051702 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-svc\") pod \"678441af-ae6d-4142-91fc-450e171e9a35\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.051733 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data\") pod \"f2b04d42-79df-44db-93d0-c03e0e80b82b\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.051763 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f2b04d42-79df-44db-93d0-c03e0e80b82b-logs\") pod \"f2b04d42-79df-44db-93d0-c03e0e80b82b\" (UID: \"f2b04d42-79df-44db-93d0-c03e0e80b82b\") " Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.051787 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-swift-storage-0\") pod \"678441af-ae6d-4142-91fc-450e171e9a35\" (UID: \"678441af-ae6d-4142-91fc-450e171e9a35\") " Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.057880 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f2b04d42-79df-44db-93d0-c03e0e80b82b" (UID: "f2b04d42-79df-44db-93d0-c03e0e80b82b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.059048 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b04d42-79df-44db-93d0-c03e0e80b82b-logs" (OuterVolumeSpecName: "logs") pod "f2b04d42-79df-44db-93d0-c03e0e80b82b" (UID: "f2b04d42-79df-44db-93d0-c03e0e80b82b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.064760 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b04d42-79df-44db-93d0-c03e0e80b82b-kube-api-access-wbblc" (OuterVolumeSpecName: "kube-api-access-wbblc") pod "f2b04d42-79df-44db-93d0-c03e0e80b82b" (UID: "f2b04d42-79df-44db-93d0-c03e0e80b82b"). InnerVolumeSpecName "kube-api-access-wbblc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.078000 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678441af-ae6d-4142-91fc-450e171e9a35-kube-api-access-cj7tz" (OuterVolumeSpecName: "kube-api-access-cj7tz") pod "678441af-ae6d-4142-91fc-450e171e9a35" (UID: "678441af-ae6d-4142-91fc-450e171e9a35"). InnerVolumeSpecName "kube-api-access-cj7tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.123219 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2b04d42-79df-44db-93d0-c03e0e80b82b" (UID: "f2b04d42-79df-44db-93d0-c03e0e80b82b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.145314 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "678441af-ae6d-4142-91fc-450e171e9a35" (UID: "678441af-ae6d-4142-91fc-450e171e9a35"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.153031 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.153066 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.153079 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj7tz\" (UniqueName: \"kubernetes.io/projected/678441af-ae6d-4142-91fc-450e171e9a35-kube-api-access-cj7tz\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.153093 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbblc\" (UniqueName: \"kubernetes.io/projected/f2b04d42-79df-44db-93d0-c03e0e80b82b-kube-api-access-wbblc\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.153390 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.153414 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b04d42-79df-44db-93d0-c03e0e80b82b-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.161896 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "678441af-ae6d-4142-91fc-450e171e9a35" (UID: 
"678441af-ae6d-4142-91fc-450e171e9a35"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.173317 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data" (OuterVolumeSpecName: "config-data") pod "f2b04d42-79df-44db-93d0-c03e0e80b82b" (UID: "f2b04d42-79df-44db-93d0-c03e0e80b82b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.180236 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-config" (OuterVolumeSpecName: "config") pod "678441af-ae6d-4142-91fc-450e171e9a35" (UID: "678441af-ae6d-4142-91fc-450e171e9a35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.193216 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "678441af-ae6d-4142-91fc-450e171e9a35" (UID: "678441af-ae6d-4142-91fc-450e171e9a35"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.201361 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "678441af-ae6d-4142-91fc-450e171e9a35" (UID: "678441af-ae6d-4142-91fc-450e171e9a35"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.254247 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b04d42-79df-44db-93d0-c03e0e80b82b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.254290 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.254302 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.254311 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.254319 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/678441af-ae6d-4142-91fc-450e171e9a35-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.677838 4787 generic.go:334] "Generic (PLEG): container finished" podID="ddde001c-cec7-43fc-8f4d-0d7153ec3911" containerID="9948943997fcb0808d6aebe4e568012904e9afb2462c57558f70d6dc789d4415" exitCode=0 Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.678000 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-559dbc6bdd-zjdgc" event={"ID":"ddde001c-cec7-43fc-8f4d-0d7153ec3911","Type":"ContainerDied","Data":"9948943997fcb0808d6aebe4e568012904e9afb2462c57558f70d6dc789d4415"} Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 
18:03:57.684061 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.684912 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c5bf56f6d-9jlnz" event={"ID":"f2b04d42-79df-44db-93d0-c03e0e80b82b","Type":"ContainerDied","Data":"1d7630adbeeca0d31ebb62447f14af3e7a2093e27c385cdef82e1070951270a5"} Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.684975 4787 scope.go:117] "RemoveContainer" containerID="4bb48154bd80422e171eea8ec498dfcb6d8f829679360a23ea6d1d31b6313666" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.690417 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-vrnz8" event={"ID":"678441af-ae6d-4142-91fc-450e171e9a35","Type":"ContainerDied","Data":"edd68a91662af87e7b9f68953f5c629c3b2aa0991cd0c81207d977230459e21b"} Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.690516 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-vrnz8" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.694603 4787 generic.go:334] "Generic (PLEG): container finished" podID="2112d17d-e9c4-4384-86a9-a600503abd45" containerID="201c58bcacb82f327df09ae94b5879e5b312566dd56ef1c2ff5b3b39df63342c" exitCode=0 Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.694644 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2112d17d-e9c4-4384-86a9-a600503abd45","Type":"ContainerDied","Data":"201c58bcacb82f327df09ae94b5879e5b312566dd56ef1c2ff5b3b39df63342c"} Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.714612 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c5bf56f6d-9jlnz"] Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.730000 4787 scope.go:117] "RemoveContainer" containerID="e4ca64e1a76343ea3cc4fd2ad477de8be658b4143f37207870a497e65d791095" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.739968 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6c5bf56f6d-9jlnz"] Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.751118 4787 scope.go:117] "RemoveContainer" containerID="da352534067ba348e8ef838405f70e84e43e680315572845d971b6836eea7902" Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.757012 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-vrnz8"] Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.764101 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685444497c-vrnz8"] Jan 26 18:03:57 crc kubenswrapper[4787]: I0126 18:03:57.778287 4787 scope.go:117] "RemoveContainer" containerID="ad807bc1dba590992256b350b9c28d09fe142075b4e540afbb3b4942797ff995" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.183998 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 18:03:58 crc 
kubenswrapper[4787]: E0126 18:03:58.184561 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api-log" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.184626 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api-log" Jan 26 18:03:58 crc kubenswrapper[4787]: E0126 18:03:58.184680 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678441af-ae6d-4142-91fc-450e171e9a35" containerName="dnsmasq-dns" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.184743 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="678441af-ae6d-4142-91fc-450e171e9a35" containerName="dnsmasq-dns" Jan 26 18:03:58 crc kubenswrapper[4787]: E0126 18:03:58.184799 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.184847 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api" Jan 26 18:03:58 crc kubenswrapper[4787]: E0126 18:03:58.184921 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678441af-ae6d-4142-91fc-450e171e9a35" containerName="init" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.184990 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="678441af-ae6d-4142-91fc-450e171e9a35" containerName="init" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.185209 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.185269 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" containerName="barbican-api-log" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 
18:03:58.185323 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="678441af-ae6d-4142-91fc-450e171e9a35" containerName="dnsmasq-dns" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.185937 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.189737 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.190200 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.190400 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-8s79x" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.216174 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.370293 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config-secret\") pod \"openstackclient\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.370347 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config\") pod \"openstackclient\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.370393 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrj7p\" (UniqueName: 
\"kubernetes.io/projected/cf362037-b6e5-4dce-8da1-698fd75ff850-kube-api-access-wrj7p\") pod \"openstackclient\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.370440 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.471442 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config-secret\") pod \"openstackclient\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.471710 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config\") pod \"openstackclient\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.471752 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrj7p\" (UniqueName: \"kubernetes.io/projected/cf362037-b6e5-4dce-8da1-698fd75ff850-kube-api-access-wrj7p\") pod \"openstackclient\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.471794 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.473547 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config\") pod \"openstackclient\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.478675 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.481485 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config-secret\") pod \"openstackclient\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.506700 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrj7p\" (UniqueName: \"kubernetes.io/projected/cf362037-b6e5-4dce-8da1-698fd75ff850-kube-api-access-wrj7p\") pod \"openstackclient\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " pod="openstack/openstackclient" Jan 26 18:03:58 crc kubenswrapper[4787]: I0126 18:03:58.799441 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 18:03:59 crc kubenswrapper[4787]: I0126 18:03:59.265696 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 18:03:59 crc kubenswrapper[4787]: I0126 18:03:59.599727 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678441af-ae6d-4142-91fc-450e171e9a35" path="/var/lib/kubelet/pods/678441af-ae6d-4142-91fc-450e171e9a35/volumes" Jan 26 18:03:59 crc kubenswrapper[4787]: I0126 18:03:59.600382 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b04d42-79df-44db-93d0-c03e0e80b82b" path="/var/lib/kubelet/pods/f2b04d42-79df-44db-93d0-c03e0e80b82b/volumes" Jan 26 18:03:59 crc kubenswrapper[4787]: I0126 18:03:59.728963 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"cf362037-b6e5-4dce-8da1-698fd75ff850","Type":"ContainerStarted","Data":"adde061d277266645f32a4898845248a6dca7499a9873e18be48ab331f6cc16a"} Jan 26 18:04:01 crc kubenswrapper[4787]: I0126 18:04:01.750760 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2112d17d-e9c4-4384-86a9-a600503abd45","Type":"ContainerDied","Data":"c0fd9a57dbb14e6417f9d1146c62ebea0b133fb6cde099d6377e92b29d9120d8"} Jan 26 18:04:01 crc kubenswrapper[4787]: I0126 18:04:01.750893 4787 generic.go:334] "Generic (PLEG): container finished" podID="2112d17d-e9c4-4384-86a9-a600503abd45" containerID="c0fd9a57dbb14e6417f9d1146c62ebea0b133fb6cde099d6377e92b29d9120d8" exitCode=0 Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.071613 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.241798 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-combined-ca-bundle\") pod \"2112d17d-e9c4-4384-86a9-a600503abd45\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.241878 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x97pr\" (UniqueName: \"kubernetes.io/projected/2112d17d-e9c4-4384-86a9-a600503abd45-kube-api-access-x97pr\") pod \"2112d17d-e9c4-4384-86a9-a600503abd45\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.241997 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data\") pod \"2112d17d-e9c4-4384-86a9-a600503abd45\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.242056 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data-custom\") pod \"2112d17d-e9c4-4384-86a9-a600503abd45\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.242113 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-scripts\") pod \"2112d17d-e9c4-4384-86a9-a600503abd45\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.242165 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/2112d17d-e9c4-4384-86a9-a600503abd45-etc-machine-id\") pod \"2112d17d-e9c4-4384-86a9-a600503abd45\" (UID: \"2112d17d-e9c4-4384-86a9-a600503abd45\") " Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.242679 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2112d17d-e9c4-4384-86a9-a600503abd45-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2112d17d-e9c4-4384-86a9-a600503abd45" (UID: "2112d17d-e9c4-4384-86a9-a600503abd45"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.248168 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-scripts" (OuterVolumeSpecName: "scripts") pod "2112d17d-e9c4-4384-86a9-a600503abd45" (UID: "2112d17d-e9c4-4384-86a9-a600503abd45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.249649 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2112d17d-e9c4-4384-86a9-a600503abd45" (UID: "2112d17d-e9c4-4384-86a9-a600503abd45"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.265230 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2112d17d-e9c4-4384-86a9-a600503abd45-kube-api-access-x97pr" (OuterVolumeSpecName: "kube-api-access-x97pr") pod "2112d17d-e9c4-4384-86a9-a600503abd45" (UID: "2112d17d-e9c4-4384-86a9-a600503abd45"). InnerVolumeSpecName "kube-api-access-x97pr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.294032 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2112d17d-e9c4-4384-86a9-a600503abd45" (UID: "2112d17d-e9c4-4384-86a9-a600503abd45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.344433 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.344459 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.344468 4787 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2112d17d-e9c4-4384-86a9-a600503abd45-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.344477 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.344486 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x97pr\" (UniqueName: \"kubernetes.io/projected/2112d17d-e9c4-4384-86a9-a600503abd45-kube-api-access-x97pr\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.393082 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data" (OuterVolumeSpecName: "config-data") pod "2112d17d-e9c4-4384-86a9-a600503abd45" (UID: "2112d17d-e9c4-4384-86a9-a600503abd45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.446735 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2112d17d-e9c4-4384-86a9-a600503abd45-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.761385 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2112d17d-e9c4-4384-86a9-a600503abd45","Type":"ContainerDied","Data":"bb0f9490c0568d666fe06b1bb75fc402d2d2b970317d41ef45f5610c3f358ce6"} Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.761443 4787 scope.go:117] "RemoveContainer" containerID="201c58bcacb82f327df09ae94b5879e5b312566dd56ef1c2ff5b3b39df63342c" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.761472 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.809602 4787 scope.go:117] "RemoveContainer" containerID="c0fd9a57dbb14e6417f9d1146c62ebea0b133fb6cde099d6377e92b29d9120d8" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.809729 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.844424 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.852399 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:04:02 crc kubenswrapper[4787]: E0126 18:04:02.852796 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2112d17d-e9c4-4384-86a9-a600503abd45" containerName="probe" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.852809 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2112d17d-e9c4-4384-86a9-a600503abd45" containerName="probe" Jan 26 18:04:02 crc kubenswrapper[4787]: E0126 18:04:02.852823 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2112d17d-e9c4-4384-86a9-a600503abd45" containerName="cinder-scheduler" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.852831 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2112d17d-e9c4-4384-86a9-a600503abd45" containerName="cinder-scheduler" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.853007 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2112d17d-e9c4-4384-86a9-a600503abd45" containerName="probe" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.853020 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2112d17d-e9c4-4384-86a9-a600503abd45" containerName="cinder-scheduler" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.855008 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.857383 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.866418 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.957185 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.957273 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38c9405f-74d3-4282-90ac-0a9909a68b43-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.957317 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.957338 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.957405 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkwkp\" (UniqueName: \"kubernetes.io/projected/38c9405f-74d3-4282-90ac-0a9909a68b43-kube-api-access-rkwkp\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:02 crc kubenswrapper[4787]: I0126 18:04:02.957434 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.059341 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.059404 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.059458 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkwkp\" (UniqueName: \"kubernetes.io/projected/38c9405f-74d3-4282-90ac-0a9909a68b43-kube-api-access-rkwkp\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.059489 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.059592 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.059614 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38c9405f-74d3-4282-90ac-0a9909a68b43-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.059694 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38c9405f-74d3-4282-90ac-0a9909a68b43-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.063925 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.064743 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " 
pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.065042 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.065483 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.081468 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkwkp\" (UniqueName: \"kubernetes.io/projected/38c9405f-74d3-4282-90ac-0a9909a68b43-kube-api-access-rkwkp\") pod \"cinder-scheduler-0\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.207918 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.575457 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5dfdd88467-46gsc"] Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.576904 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.582382 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.589820 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.590934 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.619146 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2112d17d-e9c4-4384-86a9-a600503abd45" path="/var/lib/kubelet/pods/2112d17d-e9c4-4384-86a9-a600503abd45/volumes" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.619831 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5dfdd88467-46gsc"] Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.670420 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-combined-ca-bundle\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.670475 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-run-httpd\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.670500 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-etc-swift\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.670574 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-config-data\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.670615 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-internal-tls-certs\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.670636 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw4s4\" (UniqueName: \"kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-kube-api-access-xw4s4\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.670664 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-log-httpd\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.670695 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-public-tls-certs\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.735403 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.772916 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-etc-swift\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.772992 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-config-data\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.773037 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-internal-tls-certs\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.773056 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw4s4\" (UniqueName: \"kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-kube-api-access-xw4s4\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc 
kubenswrapper[4787]: I0126 18:04:03.773088 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-log-httpd\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.773113 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-public-tls-certs\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.773353 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-combined-ca-bundle\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.773373 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-run-httpd\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.773782 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-run-httpd\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.774842 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-log-httpd\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.781182 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-internal-tls-certs\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.782343 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-combined-ca-bundle\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.782637 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c9405f-74d3-4282-90ac-0a9909a68b43","Type":"ContainerStarted","Data":"952d881ce88b2a48c39e97927a6e8de59c892815eecfc3a2709c9a25eafc8600"} Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.782809 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-etc-swift\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.783460 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-config-data\") pod \"swift-proxy-5dfdd88467-46gsc\" 
(UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.785502 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-public-tls-certs\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.833541 4787 generic.go:334] "Generic (PLEG): container finished" podID="ddde001c-cec7-43fc-8f4d-0d7153ec3911" containerID="acabc500671c7c41b523b361e65d9650715f61a36ff9a4f674f0f97eb8c49075" exitCode=0 Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.833872 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-559dbc6bdd-zjdgc" event={"ID":"ddde001c-cec7-43fc-8f4d-0d7153ec3911","Type":"ContainerDied","Data":"acabc500671c7c41b523b361e65d9650715f61a36ff9a4f674f0f97eb8c49075"} Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.848517 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw4s4\" (UniqueName: \"kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-kube-api-access-xw4s4\") pod \"swift-proxy-5dfdd88467-46gsc\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.913394 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:03 crc kubenswrapper[4787]: I0126 18:04:03.965164 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.084940 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-ovndb-tls-certs\") pod \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.085129 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnfqr\" (UniqueName: \"kubernetes.io/projected/ddde001c-cec7-43fc-8f4d-0d7153ec3911-kube-api-access-cnfqr\") pod \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.085366 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-httpd-config\") pod \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.085395 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-combined-ca-bundle\") pod \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.085900 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-config\") pod \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\" (UID: \"ddde001c-cec7-43fc-8f4d-0d7153ec3911\") " Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.091418 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ddde001c-cec7-43fc-8f4d-0d7153ec3911-kube-api-access-cnfqr" (OuterVolumeSpecName: "kube-api-access-cnfqr") pod "ddde001c-cec7-43fc-8f4d-0d7153ec3911" (UID: "ddde001c-cec7-43fc-8f4d-0d7153ec3911"). InnerVolumeSpecName "kube-api-access-cnfqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.095101 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ddde001c-cec7-43fc-8f4d-0d7153ec3911" (UID: "ddde001c-cec7-43fc-8f4d-0d7153ec3911"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.165694 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddde001c-cec7-43fc-8f4d-0d7153ec3911" (UID: "ddde001c-cec7-43fc-8f4d-0d7153ec3911"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.186368 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-config" (OuterVolumeSpecName: "config") pod "ddde001c-cec7-43fc-8f4d-0d7153ec3911" (UID: "ddde001c-cec7-43fc-8f4d-0d7153ec3911"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.188303 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.188334 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.188349 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.188359 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnfqr\" (UniqueName: \"kubernetes.io/projected/ddde001c-cec7-43fc-8f4d-0d7153ec3911-kube-api-access-cnfqr\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.190099 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ddde001c-cec7-43fc-8f4d-0d7153ec3911" (UID: "ddde001c-cec7-43fc-8f4d-0d7153ec3911"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.290306 4787 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddde001c-cec7-43fc-8f4d-0d7153ec3911-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.460936 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5dfdd88467-46gsc"] Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.864920 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-559dbc6bdd-zjdgc" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.865110 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-559dbc6bdd-zjdgc" event={"ID":"ddde001c-cec7-43fc-8f4d-0d7153ec3911","Type":"ContainerDied","Data":"c54eb01fb3b68d96a1bd9263020efc0f03d6ed5618c6ac1cdc598e91145e7ff2"} Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.865369 4787 scope.go:117] "RemoveContainer" containerID="9948943997fcb0808d6aebe4e568012904e9afb2462c57558f70d6dc789d4415" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.874609 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c9405f-74d3-4282-90ac-0a9909a68b43","Type":"ContainerStarted","Data":"7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a"} Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.876633 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dfdd88467-46gsc" event={"ID":"4b418d30-17a4-4cd9-a16d-c10b1f030492","Type":"ContainerStarted","Data":"f119de1c64759eab74ed8d3728129ed6c4867af79ba4028686238074915d9c03"} Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.876743 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dfdd88467-46gsc" 
event={"ID":"4b418d30-17a4-4cd9-a16d-c10b1f030492","Type":"ContainerStarted","Data":"b5e1a7fb7bf10043d74490bc811cff07cff7938880445e0c2ec067feb074ccee"} Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.910021 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-559dbc6bdd-zjdgc"] Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.918085 4787 scope.go:117] "RemoveContainer" containerID="acabc500671c7c41b523b361e65d9650715f61a36ff9a4f674f0f97eb8c49075" Jan 26 18:04:04 crc kubenswrapper[4787]: I0126 18:04:04.918446 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-559dbc6bdd-zjdgc"] Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.563532 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.564217 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="ceilometer-central-agent" containerID="cri-o://6dc0f6877ede41acaf1a36c5fb026a85cc0f56d152af63dd0e82dd0ee31465ec" gracePeriod=30 Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.564318 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="sg-core" containerID="cri-o://97457590143af98c242b616f62ef4d708443ff2708ea9818f49cfac492a58a72" gracePeriod=30 Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.564318 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="ceilometer-notification-agent" containerID="cri-o://36970f4e2fd3be32ffcd77769bbc7ca0032cd78597279b4c25303ed3f53c92b6" gracePeriod=30 Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.564355 4787 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="proxy-httpd" containerID="cri-o://20ce24b00e4a2879b62e27bdfd63933ada0a6adfa0e1272af42f81cf5ebd50b0" gracePeriod=30 Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.578101 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": EOF" Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.602241 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddde001c-cec7-43fc-8f4d-0d7153ec3911" path="/var/lib/kubelet/pods/ddde001c-cec7-43fc-8f4d-0d7153ec3911/volumes" Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.769911 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.915397 4787 generic.go:334] "Generic (PLEG): container finished" podID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerID="20ce24b00e4a2879b62e27bdfd63933ada0a6adfa0e1272af42f81cf5ebd50b0" exitCode=0 Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.915436 4787 generic.go:334] "Generic (PLEG): container finished" podID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerID="97457590143af98c242b616f62ef4d708443ff2708ea9818f49cfac492a58a72" exitCode=2 Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.915480 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8003b6b-1090-4019-8414-29ab3a393f0c","Type":"ContainerDied","Data":"20ce24b00e4a2879b62e27bdfd63933ada0a6adfa0e1272af42f81cf5ebd50b0"} Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.915509 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b8003b6b-1090-4019-8414-29ab3a393f0c","Type":"ContainerDied","Data":"97457590143af98c242b616f62ef4d708443ff2708ea9818f49cfac492a58a72"} Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.917255 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c9405f-74d3-4282-90ac-0a9909a68b43","Type":"ContainerStarted","Data":"a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d"} Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.922209 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dfdd88467-46gsc" event={"ID":"4b418d30-17a4-4cd9-a16d-c10b1f030492","Type":"ContainerStarted","Data":"9087d36e049ad60d022d43c8eeebecb1463be62c5936b387b3dae05e8f883648"} Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.923101 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.923138 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:05 crc kubenswrapper[4787]: I0126 18:04:05.961429 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5dfdd88467-46gsc" podStartSLOduration=2.961405813 podStartE2EDuration="2.961405813s" podCreationTimestamp="2026-01-26 18:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:04:05.956190887 +0000 UTC m=+1214.663327030" watchObservedRunningTime="2026-01-26 18:04:05.961405813 +0000 UTC m=+1214.668541946" Jan 26 18:04:06 crc kubenswrapper[4787]: I0126 18:04:06.938458 4787 generic.go:334] "Generic (PLEG): container finished" podID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerID="36970f4e2fd3be32ffcd77769bbc7ca0032cd78597279b4c25303ed3f53c92b6" exitCode=0 Jan 26 18:04:06 crc kubenswrapper[4787]: I0126 
18:04:06.938512 4787 generic.go:334] "Generic (PLEG): container finished" podID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerID="6dc0f6877ede41acaf1a36c5fb026a85cc0f56d152af63dd0e82dd0ee31465ec" exitCode=0 Jan 26 18:04:06 crc kubenswrapper[4787]: I0126 18:04:06.939493 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8003b6b-1090-4019-8414-29ab3a393f0c","Type":"ContainerDied","Data":"36970f4e2fd3be32ffcd77769bbc7ca0032cd78597279b4c25303ed3f53c92b6"} Jan 26 18:04:06 crc kubenswrapper[4787]: I0126 18:04:06.939564 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8003b6b-1090-4019-8414-29ab3a393f0c","Type":"ContainerDied","Data":"6dc0f6877ede41acaf1a36c5fb026a85cc0f56d152af63dd0e82dd0ee31465ec"} Jan 26 18:04:08 crc kubenswrapper[4787]: I0126 18:04:08.208738 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 18:04:10 crc kubenswrapper[4787]: I0126 18:04:10.732355 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": dial tcp 10.217.0.160:3000: connect: connection refused" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.280627 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.303955 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=9.30392553 podStartE2EDuration="9.30392553s" podCreationTimestamp="2026-01-26 18:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:04:05.985685268 +0000 UTC m=+1214.692821401" watchObservedRunningTime="2026-01-26 18:04:11.30392553 +0000 UTC m=+1220.011061663" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.442279 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-config-data\") pod \"b8003b6b-1090-4019-8414-29ab3a393f0c\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.442327 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-scripts\") pod \"b8003b6b-1090-4019-8414-29ab3a393f0c\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.442393 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-sg-core-conf-yaml\") pod \"b8003b6b-1090-4019-8414-29ab3a393f0c\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.443189 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-log-httpd\") pod \"b8003b6b-1090-4019-8414-29ab3a393f0c\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " Jan 
26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.443328 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-run-httpd\") pod \"b8003b6b-1090-4019-8414-29ab3a393f0c\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.443391 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9n2d\" (UniqueName: \"kubernetes.io/projected/b8003b6b-1090-4019-8414-29ab3a393f0c-kube-api-access-x9n2d\") pod \"b8003b6b-1090-4019-8414-29ab3a393f0c\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.443530 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-combined-ca-bundle\") pod \"b8003b6b-1090-4019-8414-29ab3a393f0c\" (UID: \"b8003b6b-1090-4019-8414-29ab3a393f0c\") " Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.443595 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b8003b6b-1090-4019-8414-29ab3a393f0c" (UID: "b8003b6b-1090-4019-8414-29ab3a393f0c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.443629 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b8003b6b-1090-4019-8414-29ab3a393f0c" (UID: "b8003b6b-1090-4019-8414-29ab3a393f0c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.443940 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.443965 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8003b6b-1090-4019-8414-29ab3a393f0c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.446356 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-scripts" (OuterVolumeSpecName: "scripts") pod "b8003b6b-1090-4019-8414-29ab3a393f0c" (UID: "b8003b6b-1090-4019-8414-29ab3a393f0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.454765 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8003b6b-1090-4019-8414-29ab3a393f0c-kube-api-access-x9n2d" (OuterVolumeSpecName: "kube-api-access-x9n2d") pod "b8003b6b-1090-4019-8414-29ab3a393f0c" (UID: "b8003b6b-1090-4019-8414-29ab3a393f0c"). InnerVolumeSpecName "kube-api-access-x9n2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.480647 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b8003b6b-1090-4019-8414-29ab3a393f0c" (UID: "b8003b6b-1090-4019-8414-29ab3a393f0c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.535670 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8003b6b-1090-4019-8414-29ab3a393f0c" (UID: "b8003b6b-1090-4019-8414-29ab3a393f0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.545876 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.545902 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9n2d\" (UniqueName: \"kubernetes.io/projected/b8003b6b-1090-4019-8414-29ab3a393f0c-kube-api-access-x9n2d\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.545912 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.545922 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.550981 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-config-data" (OuterVolumeSpecName: "config-data") pod "b8003b6b-1090-4019-8414-29ab3a393f0c" (UID: "b8003b6b-1090-4019-8414-29ab3a393f0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.648014 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8003b6b-1090-4019-8414-29ab3a393f0c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.997996 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8003b6b-1090-4019-8414-29ab3a393f0c","Type":"ContainerDied","Data":"0e801534a77501c1b50347a648dcc268ed7a563769ac980239114689dc20af68"} Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.998074 4787 scope.go:117] "RemoveContainer" containerID="20ce24b00e4a2879b62e27bdfd63933ada0a6adfa0e1272af42f81cf5ebd50b0" Jan 26 18:04:11 crc kubenswrapper[4787]: I0126 18:04:11.998250 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.002758 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"cf362037-b6e5-4dce-8da1-698fd75ff850","Type":"ContainerStarted","Data":"92ee99ec8f96ad8d93512ba959ebd6a2d327963c172a5fa70335f6edb958771d"} Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.020609 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.227689593 podStartE2EDuration="14.02059174s" podCreationTimestamp="2026-01-26 18:03:58 +0000 UTC" firstStartedPulling="2026-01-26 18:03:59.270883203 +0000 UTC m=+1207.978019336" lastFinishedPulling="2026-01-26 18:04:11.06378535 +0000 UTC m=+1219.770921483" observedRunningTime="2026-01-26 18:04:12.016781867 +0000 UTC m=+1220.723918000" watchObservedRunningTime="2026-01-26 18:04:12.02059174 +0000 UTC m=+1220.727727873" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.050399 4787 scope.go:117] "RemoveContainer" 
containerID="97457590143af98c242b616f62ef4d708443ff2708ea9818f49cfac492a58a72" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.054459 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.067377 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.076848 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:12 crc kubenswrapper[4787]: E0126 18:04:12.077539 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddde001c-cec7-43fc-8f4d-0d7153ec3911" containerName="neutron-httpd" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.077568 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddde001c-cec7-43fc-8f4d-0d7153ec3911" containerName="neutron-httpd" Jan 26 18:04:12 crc kubenswrapper[4787]: E0126 18:04:12.077591 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddde001c-cec7-43fc-8f4d-0d7153ec3911" containerName="neutron-api" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.077601 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddde001c-cec7-43fc-8f4d-0d7153ec3911" containerName="neutron-api" Jan 26 18:04:12 crc kubenswrapper[4787]: E0126 18:04:12.077617 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="proxy-httpd" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.077625 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="proxy-httpd" Jan 26 18:04:12 crc kubenswrapper[4787]: E0126 18:04:12.077636 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="ceilometer-notification-agent" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.077644 4787 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="ceilometer-notification-agent" Jan 26 18:04:12 crc kubenswrapper[4787]: E0126 18:04:12.077678 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="ceilometer-central-agent" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.077687 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="ceilometer-central-agent" Jan 26 18:04:12 crc kubenswrapper[4787]: E0126 18:04:12.077699 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="sg-core" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.077708 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="sg-core" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.077911 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="ceilometer-central-agent" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.077927 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="sg-core" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.077951 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="proxy-httpd" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.077967 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddde001c-cec7-43fc-8f4d-0d7153ec3911" containerName="neutron-api" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.077979 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddde001c-cec7-43fc-8f4d-0d7153ec3911" containerName="neutron-httpd" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.078021 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" containerName="ceilometer-notification-agent" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.080022 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.083336 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.083547 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.087895 4787 scope.go:117] "RemoveContainer" containerID="36970f4e2fd3be32ffcd77769bbc7ca0032cd78597279b4c25303ed3f53c92b6" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.092818 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.136870 4787 scope.go:117] "RemoveContainer" containerID="6dc0f6877ede41acaf1a36c5fb026a85cc0f56d152af63dd0e82dd0ee31465ec" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.155934 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-log-httpd\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.156035 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.157031 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-run-httpd\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.157102 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-scripts\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.157147 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-config-data\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.157174 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cff92\" (UniqueName: \"kubernetes.io/projected/049748db-af29-4e1e-9666-17ad13383b6d-kube-api-access-cff92\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.157202 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.258815 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-log-httpd\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.258868 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.258919 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-run-httpd\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.258942 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-scripts\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.258979 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-config-data\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.259024 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cff92\" (UniqueName: \"kubernetes.io/projected/049748db-af29-4e1e-9666-17ad13383b6d-kube-api-access-cff92\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 
18:04:12.259046 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.259309 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-log-httpd\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.259620 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-run-httpd\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.275178 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.275474 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-scripts\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.277401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-config-data\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " 
pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.277746 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.278824 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cff92\" (UniqueName: \"kubernetes.io/projected/049748db-af29-4e1e-9666-17ad13383b6d-kube-api-access-cff92\") pod \"ceilometer-0\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.402315 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:04:12 crc kubenswrapper[4787]: I0126 18:04:12.862108 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:12 crc kubenswrapper[4787]: W0126 18:04:12.866052 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod049748db_af29_4e1e_9666_17ad13383b6d.slice/crio-dd11945dd47fe916628481b39295dab1fbeb2c4e426c4c17642d697ec3888876 WatchSource:0}: Error finding container dd11945dd47fe916628481b39295dab1fbeb2c4e426c4c17642d697ec3888876: Status 404 returned error can't find the container with id dd11945dd47fe916628481b39295dab1fbeb2c4e426c4c17642d697ec3888876 Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.011645 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"049748db-af29-4e1e-9666-17ad13383b6d","Type":"ContainerStarted","Data":"dd11945dd47fe916628481b39295dab1fbeb2c4e426c4c17642d697ec3888876"} Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.409303 4787 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.602609 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8003b6b-1090-4019-8414-29ab3a393f0c" path="/var/lib/kubelet/pods/b8003b6b-1090-4019-8414-29ab3a393f0c/volumes" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.717321 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vftnt"] Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.718677 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vftnt" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.727444 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vftnt"] Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.825989 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-k4b5g"] Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.827267 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k4b5g" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.840038 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-558e-account-create-update-brzxq"] Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.842563 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-558e-account-create-update-brzxq" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.846602 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.857999 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-k4b5g"] Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.868019 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-558e-account-create-update-brzxq"] Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.886924 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2123e1b-d624-44fa-9b74-b7379df16f9a-operator-scripts\") pod \"nova-api-db-create-vftnt\" (UID: \"e2123e1b-d624-44fa-9b74-b7379df16f9a\") " pod="openstack/nova-api-db-create-vftnt" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.886998 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2wc\" (UniqueName: \"kubernetes.io/projected/e2123e1b-d624-44fa-9b74-b7379df16f9a-kube-api-access-8s2wc\") pod \"nova-api-db-create-vftnt\" (UID: \"e2123e1b-d624-44fa-9b74-b7379df16f9a\") " pod="openstack/nova-api-db-create-vftnt" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.924213 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9rdc5"] Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.925660 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9rdc5" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.928521 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.929645 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.951494 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9rdc5"] Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.996993 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-operator-scripts\") pod \"nova-cell0-db-create-k4b5g\" (UID: \"c96eb47d-13fc-4a65-9fe6-292dda4b1fec\") " pod="openstack/nova-cell0-db-create-k4b5g" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.997050 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c875d43e-8019-4eb4-8c37-d189a8eb0a01-operator-scripts\") pod \"nova-api-558e-account-create-update-brzxq\" (UID: \"c875d43e-8019-4eb4-8c37-d189a8eb0a01\") " pod="openstack/nova-api-558e-account-create-update-brzxq" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.997107 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dec399e-5b0d-4970-8e95-d17ec238f3a1-operator-scripts\") pod \"nova-cell1-db-create-9rdc5\" (UID: \"2dec399e-5b0d-4970-8e95-d17ec238f3a1\") " pod="openstack/nova-cell1-db-create-9rdc5" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.997128 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-dkj4r\" (UniqueName: \"kubernetes.io/projected/c875d43e-8019-4eb4-8c37-d189a8eb0a01-kube-api-access-dkj4r\") pod \"nova-api-558e-account-create-update-brzxq\" (UID: \"c875d43e-8019-4eb4-8c37-d189a8eb0a01\") " pod="openstack/nova-api-558e-account-create-update-brzxq" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.997183 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b5vm\" (UniqueName: \"kubernetes.io/projected/2dec399e-5b0d-4970-8e95-d17ec238f3a1-kube-api-access-4b5vm\") pod \"nova-cell1-db-create-9rdc5\" (UID: \"2dec399e-5b0d-4970-8e95-d17ec238f3a1\") " pod="openstack/nova-cell1-db-create-9rdc5" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.997218 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6md98\" (UniqueName: \"kubernetes.io/projected/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-kube-api-access-6md98\") pod \"nova-cell0-db-create-k4b5g\" (UID: \"c96eb47d-13fc-4a65-9fe6-292dda4b1fec\") " pod="openstack/nova-cell0-db-create-k4b5g" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.997240 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2123e1b-d624-44fa-9b74-b7379df16f9a-operator-scripts\") pod \"nova-api-db-create-vftnt\" (UID: \"e2123e1b-d624-44fa-9b74-b7379df16f9a\") " pod="openstack/nova-api-db-create-vftnt" Jan 26 18:04:13 crc kubenswrapper[4787]: I0126 18:04:13.997263 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s2wc\" (UniqueName: \"kubernetes.io/projected/e2123e1b-d624-44fa-9b74-b7379df16f9a-kube-api-access-8s2wc\") pod \"nova-api-db-create-vftnt\" (UID: \"e2123e1b-d624-44fa-9b74-b7379df16f9a\") " pod="openstack/nova-api-db-create-vftnt" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.000327 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2123e1b-d624-44fa-9b74-b7379df16f9a-operator-scripts\") pod \"nova-api-db-create-vftnt\" (UID: \"e2123e1b-d624-44fa-9b74-b7379df16f9a\") " pod="openstack/nova-api-db-create-vftnt" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.029556 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s2wc\" (UniqueName: \"kubernetes.io/projected/e2123e1b-d624-44fa-9b74-b7379df16f9a-kube-api-access-8s2wc\") pod \"nova-api-db-create-vftnt\" (UID: \"e2123e1b-d624-44fa-9b74-b7379df16f9a\") " pod="openstack/nova-api-db-create-vftnt" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.037892 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vftnt" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.040372 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"049748db-af29-4e1e-9666-17ad13383b6d","Type":"ContainerStarted","Data":"6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a"} Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.040466 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5fc4-account-create-update-qttc5"] Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.042705 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.050183 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5fc4-account-create-update-qttc5"] Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.051140 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.099181 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c875d43e-8019-4eb4-8c37-d189a8eb0a01-operator-scripts\") pod \"nova-api-558e-account-create-update-brzxq\" (UID: \"c875d43e-8019-4eb4-8c37-d189a8eb0a01\") " pod="openstack/nova-api-558e-account-create-update-brzxq" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.099278 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dec399e-5b0d-4970-8e95-d17ec238f3a1-operator-scripts\") pod \"nova-cell1-db-create-9rdc5\" (UID: \"2dec399e-5b0d-4970-8e95-d17ec238f3a1\") " pod="openstack/nova-cell1-db-create-9rdc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.099308 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkj4r\" (UniqueName: \"kubernetes.io/projected/c875d43e-8019-4eb4-8c37-d189a8eb0a01-kube-api-access-dkj4r\") pod \"nova-api-558e-account-create-update-brzxq\" (UID: \"c875d43e-8019-4eb4-8c37-d189a8eb0a01\") " pod="openstack/nova-api-558e-account-create-update-brzxq" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.099427 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b5vm\" (UniqueName: \"kubernetes.io/projected/2dec399e-5b0d-4970-8e95-d17ec238f3a1-kube-api-access-4b5vm\") pod \"nova-cell1-db-create-9rdc5\" (UID: 
\"2dec399e-5b0d-4970-8e95-d17ec238f3a1\") " pod="openstack/nova-cell1-db-create-9rdc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.099473 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6md98\" (UniqueName: \"kubernetes.io/projected/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-kube-api-access-6md98\") pod \"nova-cell0-db-create-k4b5g\" (UID: \"c96eb47d-13fc-4a65-9fe6-292dda4b1fec\") " pod="openstack/nova-cell0-db-create-k4b5g" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.099546 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-operator-scripts\") pod \"nova-cell0-db-create-k4b5g\" (UID: \"c96eb47d-13fc-4a65-9fe6-292dda4b1fec\") " pod="openstack/nova-cell0-db-create-k4b5g" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.102494 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c875d43e-8019-4eb4-8c37-d189a8eb0a01-operator-scripts\") pod \"nova-api-558e-account-create-update-brzxq\" (UID: \"c875d43e-8019-4eb4-8c37-d189a8eb0a01\") " pod="openstack/nova-api-558e-account-create-update-brzxq" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.103147 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dec399e-5b0d-4970-8e95-d17ec238f3a1-operator-scripts\") pod \"nova-cell1-db-create-9rdc5\" (UID: \"2dec399e-5b0d-4970-8e95-d17ec238f3a1\") " pod="openstack/nova-cell1-db-create-9rdc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.103844 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-operator-scripts\") pod \"nova-cell0-db-create-k4b5g\" (UID: 
\"c96eb47d-13fc-4a65-9fe6-292dda4b1fec\") " pod="openstack/nova-cell0-db-create-k4b5g" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.129689 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b5vm\" (UniqueName: \"kubernetes.io/projected/2dec399e-5b0d-4970-8e95-d17ec238f3a1-kube-api-access-4b5vm\") pod \"nova-cell1-db-create-9rdc5\" (UID: \"2dec399e-5b0d-4970-8e95-d17ec238f3a1\") " pod="openstack/nova-cell1-db-create-9rdc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.131332 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkj4r\" (UniqueName: \"kubernetes.io/projected/c875d43e-8019-4eb4-8c37-d189a8eb0a01-kube-api-access-dkj4r\") pod \"nova-api-558e-account-create-update-brzxq\" (UID: \"c875d43e-8019-4eb4-8c37-d189a8eb0a01\") " pod="openstack/nova-api-558e-account-create-update-brzxq" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.132957 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6md98\" (UniqueName: \"kubernetes.io/projected/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-kube-api-access-6md98\") pod \"nova-cell0-db-create-k4b5g\" (UID: \"c96eb47d-13fc-4a65-9fe6-292dda4b1fec\") " pod="openstack/nova-cell0-db-create-k4b5g" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.165392 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k4b5g" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.174376 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-558e-account-create-update-brzxq" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.205139 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7s9r\" (UniqueName: \"kubernetes.io/projected/67c4904e-d492-4cb7-ba9f-afa2bd393aca-kube-api-access-m7s9r\") pod \"nova-cell0-5fc4-account-create-update-qttc5\" (UID: \"67c4904e-d492-4cb7-ba9f-afa2bd393aca\") " pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.205577 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c4904e-d492-4cb7-ba9f-afa2bd393aca-operator-scripts\") pod \"nova-cell0-5fc4-account-create-update-qttc5\" (UID: \"67c4904e-d492-4cb7-ba9f-afa2bd393aca\") " pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.237147 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-571d-account-create-update-vbsms"] Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.252599 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9rdc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.260854 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-571d-account-create-update-vbsms" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.264727 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-571d-account-create-update-vbsms"] Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.271479 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.307190 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c4904e-d492-4cb7-ba9f-afa2bd393aca-operator-scripts\") pod \"nova-cell0-5fc4-account-create-update-qttc5\" (UID: \"67c4904e-d492-4cb7-ba9f-afa2bd393aca\") " pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.307261 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7s9r\" (UniqueName: \"kubernetes.io/projected/67c4904e-d492-4cb7-ba9f-afa2bd393aca-kube-api-access-m7s9r\") pod \"nova-cell0-5fc4-account-create-update-qttc5\" (UID: \"67c4904e-d492-4cb7-ba9f-afa2bd393aca\") " pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.308323 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c4904e-d492-4cb7-ba9f-afa2bd393aca-operator-scripts\") pod \"nova-cell0-5fc4-account-create-update-qttc5\" (UID: \"67c4904e-d492-4cb7-ba9f-afa2bd393aca\") " pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.333618 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7s9r\" (UniqueName: \"kubernetes.io/projected/67c4904e-d492-4cb7-ba9f-afa2bd393aca-kube-api-access-m7s9r\") pod 
\"nova-cell0-5fc4-account-create-update-qttc5\" (UID: \"67c4904e-d492-4cb7-ba9f-afa2bd393aca\") " pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.408679 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb8c355-e367-439f-8584-7cf7a80fcc79-operator-scripts\") pod \"nova-cell1-571d-account-create-update-vbsms\" (UID: \"7cb8c355-e367-439f-8584-7cf7a80fcc79\") " pod="openstack/nova-cell1-571d-account-create-update-vbsms" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.408739 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbw8p\" (UniqueName: \"kubernetes.io/projected/7cb8c355-e367-439f-8584-7cf7a80fcc79-kube-api-access-lbw8p\") pod \"nova-cell1-571d-account-create-update-vbsms\" (UID: \"7cb8c355-e367-439f-8584-7cf7a80fcc79\") " pod="openstack/nova-cell1-571d-account-create-update-vbsms" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.510343 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb8c355-e367-439f-8584-7cf7a80fcc79-operator-scripts\") pod \"nova-cell1-571d-account-create-update-vbsms\" (UID: \"7cb8c355-e367-439f-8584-7cf7a80fcc79\") " pod="openstack/nova-cell1-571d-account-create-update-vbsms" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.510427 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbw8p\" (UniqueName: \"kubernetes.io/projected/7cb8c355-e367-439f-8584-7cf7a80fcc79-kube-api-access-lbw8p\") pod \"nova-cell1-571d-account-create-update-vbsms\" (UID: \"7cb8c355-e367-439f-8584-7cf7a80fcc79\") " pod="openstack/nova-cell1-571d-account-create-update-vbsms" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.511453 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb8c355-e367-439f-8584-7cf7a80fcc79-operator-scripts\") pod \"nova-cell1-571d-account-create-update-vbsms\" (UID: \"7cb8c355-e367-439f-8584-7cf7a80fcc79\") " pod="openstack/nova-cell1-571d-account-create-update-vbsms" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.529863 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbw8p\" (UniqueName: \"kubernetes.io/projected/7cb8c355-e367-439f-8584-7cf7a80fcc79-kube-api-access-lbw8p\") pod \"nova-cell1-571d-account-create-update-vbsms\" (UID: \"7cb8c355-e367-439f-8584-7cf7a80fcc79\") " pod="openstack/nova-cell1-571d-account-create-update-vbsms" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.592090 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.611883 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-571d-account-create-update-vbsms" Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.654490 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vftnt"] Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.743883 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-k4b5g"] Jan 26 18:04:14 crc kubenswrapper[4787]: W0126 18:04:14.767545 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc96eb47d_13fc_4a65_9fe6_292dda4b1fec.slice/crio-088a869c4f6721e9ddf09171cdeffee442d089352045453101230fcdad29674f WatchSource:0}: Error finding container 088a869c4f6721e9ddf09171cdeffee442d089352045453101230fcdad29674f: Status 404 returned error can't find the container with id 088a869c4f6721e9ddf09171cdeffee442d089352045453101230fcdad29674f Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.838385 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9rdc5"] Jan 26 18:04:14 crc kubenswrapper[4787]: I0126 18:04:14.858620 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-558e-account-create-update-brzxq"] Jan 26 18:04:15 crc kubenswrapper[4787]: I0126 18:04:15.088291 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9rdc5" event={"ID":"2dec399e-5b0d-4970-8e95-d17ec238f3a1","Type":"ContainerStarted","Data":"627b8fc30df881b9d9178cf759ab37587f2b51985626326c6d9624d802435f27"} Jan 26 18:04:15 crc kubenswrapper[4787]: I0126 18:04:15.096386 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-558e-account-create-update-brzxq" event={"ID":"c875d43e-8019-4eb4-8c37-d189a8eb0a01","Type":"ContainerStarted","Data":"60aea5abbfb1a2414dc12532f3428fbda5c9a44f3ef912ebd443baa5299e0b35"} Jan 26 18:04:15 crc kubenswrapper[4787]: I0126 18:04:15.103802 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k4b5g" event={"ID":"c96eb47d-13fc-4a65-9fe6-292dda4b1fec","Type":"ContainerStarted","Data":"08e360a78719e37fe96e2bfc968619b76835668a51ac0ddd8e89f91dc1196b73"} Jan 26 18:04:15 crc kubenswrapper[4787]: I0126 18:04:15.103846 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k4b5g" event={"ID":"c96eb47d-13fc-4a65-9fe6-292dda4b1fec","Type":"ContainerStarted","Data":"088a869c4f6721e9ddf09171cdeffee442d089352045453101230fcdad29674f"} Jan 26 18:04:15 crc kubenswrapper[4787]: I0126 18:04:15.109913 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vftnt" event={"ID":"e2123e1b-d624-44fa-9b74-b7379df16f9a","Type":"ContainerStarted","Data":"94dee20b5cbdda7c42538297c47887fe1a7cfe76a2625674d2e197527c1747f1"} Jan 26 18:04:15 crc kubenswrapper[4787]: I0126 18:04:15.109966 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vftnt" event={"ID":"e2123e1b-d624-44fa-9b74-b7379df16f9a","Type":"ContainerStarted","Data":"b6f221700a30fe7adcf554f17849a599489c0fc9b185ed3489b865aad8fd18be"} Jan 26 18:04:15 crc kubenswrapper[4787]: I0126 18:04:15.119846 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-k4b5g" podStartSLOduration=2.119830056 podStartE2EDuration="2.119830056s" podCreationTimestamp="2026-01-26 18:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:04:15.118146985 +0000 UTC m=+1223.825283118" watchObservedRunningTime="2026-01-26 18:04:15.119830056 +0000 UTC m=+1223.826966179" Jan 26 18:04:15 crc kubenswrapper[4787]: I0126 18:04:15.124911 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"049748db-af29-4e1e-9666-17ad13383b6d","Type":"ContainerStarted","Data":"9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2"} Jan 26 18:04:15 crc kubenswrapper[4787]: I0126 18:04:15.168133 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-vftnt" podStartSLOduration=2.168111607 podStartE2EDuration="2.168111607s" podCreationTimestamp="2026-01-26 18:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:04:15.13472298 +0000 UTC m=+1223.841859113" watchObservedRunningTime="2026-01-26 18:04:15.168111607 +0000 UTC m=+1223.875247740" Jan 26 18:04:15 crc kubenswrapper[4787]: I0126 18:04:15.184177 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5fc4-account-create-update-qttc5"] Jan 26 18:04:15 crc kubenswrapper[4787]: I0126 18:04:15.268262 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-571d-account-create-update-vbsms"] Jan 26 18:04:15 crc kubenswrapper[4787]: W0126 18:04:15.269319 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cb8c355_e367_439f_8584_7cf7a80fcc79.slice/crio-279fe875a28d7a698001610430a5db53afdf757d5c14fdf8947c2aa19c510e99 WatchSource:0}: Error finding container 279fe875a28d7a698001610430a5db53afdf757d5c14fdf8947c2aa19c510e99: Status 404 returned error can't find the container with id 279fe875a28d7a698001610430a5db53afdf757d5c14fdf8947c2aa19c510e99 Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.144383 4787 generic.go:334] "Generic (PLEG): container finished" podID="67c4904e-d492-4cb7-ba9f-afa2bd393aca" containerID="b1a2c288c6624eaedffb8037540cd342391990cff20c82756f24c24e53f09171" exitCode=0 Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.144531 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" event={"ID":"67c4904e-d492-4cb7-ba9f-afa2bd393aca","Type":"ContainerDied","Data":"b1a2c288c6624eaedffb8037540cd342391990cff20c82756f24c24e53f09171"} Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.144739 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" event={"ID":"67c4904e-d492-4cb7-ba9f-afa2bd393aca","Type":"ContainerStarted","Data":"90bdbf92ba552c923d83a28fa06554b060ff4f9fb623a1fdaa3c89dc28ea332f"} Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.148639 4787 generic.go:334] "Generic (PLEG): container finished" podID="c96eb47d-13fc-4a65-9fe6-292dda4b1fec" containerID="08e360a78719e37fe96e2bfc968619b76835668a51ac0ddd8e89f91dc1196b73" exitCode=0 Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.148698 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k4b5g" event={"ID":"c96eb47d-13fc-4a65-9fe6-292dda4b1fec","Type":"ContainerDied","Data":"08e360a78719e37fe96e2bfc968619b76835668a51ac0ddd8e89f91dc1196b73"} Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.151010 4787 generic.go:334] "Generic (PLEG): container finished" podID="e2123e1b-d624-44fa-9b74-b7379df16f9a" containerID="94dee20b5cbdda7c42538297c47887fe1a7cfe76a2625674d2e197527c1747f1" exitCode=0 Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.151080 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vftnt" event={"ID":"e2123e1b-d624-44fa-9b74-b7379df16f9a","Type":"ContainerDied","Data":"94dee20b5cbdda7c42538297c47887fe1a7cfe76a2625674d2e197527c1747f1"} Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.153528 4787 generic.go:334] "Generic (PLEG): container finished" podID="7cb8c355-e367-439f-8584-7cf7a80fcc79" containerID="087dd7ca5bbeddf0bb9167f42e29d72ee8a272d4bb2fa7dd871e63f2482d1b6c" exitCode=0 Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.153589 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-571d-account-create-update-vbsms" event={"ID":"7cb8c355-e367-439f-8584-7cf7a80fcc79","Type":"ContainerDied","Data":"087dd7ca5bbeddf0bb9167f42e29d72ee8a272d4bb2fa7dd871e63f2482d1b6c"} Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.153609 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-571d-account-create-update-vbsms" event={"ID":"7cb8c355-e367-439f-8584-7cf7a80fcc79","Type":"ContainerStarted","Data":"279fe875a28d7a698001610430a5db53afdf757d5c14fdf8947c2aa19c510e99"} Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.157920 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"049748db-af29-4e1e-9666-17ad13383b6d","Type":"ContainerStarted","Data":"94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361"} Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.162088 4787 generic.go:334] "Generic (PLEG): container finished" podID="2dec399e-5b0d-4970-8e95-d17ec238f3a1" containerID="bbdd4c25ae774f53e0764a108142f958cb8108c512e8edfe05fab546f2783f62" exitCode=0 Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.162163 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9rdc5" event={"ID":"2dec399e-5b0d-4970-8e95-d17ec238f3a1","Type":"ContainerDied","Data":"bbdd4c25ae774f53e0764a108142f958cb8108c512e8edfe05fab546f2783f62"} Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.164374 4787 generic.go:334] "Generic (PLEG): container finished" podID="c875d43e-8019-4eb4-8c37-d189a8eb0a01" containerID="cb66dd748177ebf9c4c17c1f841026be3750c54c16f6cd4b8dca41c495275e49" exitCode=0 Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.164413 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-558e-account-create-update-brzxq" 
event={"ID":"c875d43e-8019-4eb4-8c37-d189a8eb0a01","Type":"ContainerDied","Data":"cb66dd748177ebf9c4c17c1f841026be3750c54c16f6cd4b8dca41c495275e49"} Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.808342 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:04:16 crc kubenswrapper[4787]: I0126 18:04:16.808628 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.179867 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"049748db-af29-4e1e-9666-17ad13383b6d","Type":"ContainerStarted","Data":"2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398"} Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.223251 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.439320094 podStartE2EDuration="5.223225158s" podCreationTimestamp="2026-01-26 18:04:12 +0000 UTC" firstStartedPulling="2026-01-26 18:04:12.868495849 +0000 UTC m=+1221.575631982" lastFinishedPulling="2026-01-26 18:04:16.652400913 +0000 UTC m=+1225.359537046" observedRunningTime="2026-01-26 18:04:17.210416985 +0000 UTC m=+1225.917553128" watchObservedRunningTime="2026-01-26 18:04:17.223225158 +0000 UTC m=+1225.930361291" Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.695756 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-558e-account-create-update-brzxq" Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.777973 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkj4r\" (UniqueName: \"kubernetes.io/projected/c875d43e-8019-4eb4-8c37-d189a8eb0a01-kube-api-access-dkj4r\") pod \"c875d43e-8019-4eb4-8c37-d189a8eb0a01\" (UID: \"c875d43e-8019-4eb4-8c37-d189a8eb0a01\") " Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.778076 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c875d43e-8019-4eb4-8c37-d189a8eb0a01-operator-scripts\") pod \"c875d43e-8019-4eb4-8c37-d189a8eb0a01\" (UID: \"c875d43e-8019-4eb4-8c37-d189a8eb0a01\") " Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.779322 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c875d43e-8019-4eb4-8c37-d189a8eb0a01-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c875d43e-8019-4eb4-8c37-d189a8eb0a01" (UID: "c875d43e-8019-4eb4-8c37-d189a8eb0a01"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.786901 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c875d43e-8019-4eb4-8c37-d189a8eb0a01-kube-api-access-dkj4r" (OuterVolumeSpecName: "kube-api-access-dkj4r") pod "c875d43e-8019-4eb4-8c37-d189a8eb0a01" (UID: "c875d43e-8019-4eb4-8c37-d189a8eb0a01"). InnerVolumeSpecName "kube-api-access-dkj4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.880593 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkj4r\" (UniqueName: \"kubernetes.io/projected/c875d43e-8019-4eb4-8c37-d189a8eb0a01-kube-api-access-dkj4r\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.880630 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c875d43e-8019-4eb4-8c37-d189a8eb0a01-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.958540 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.966167 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-571d-account-create-update-vbsms" Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.974779 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vftnt" Jan 26 18:04:17 crc kubenswrapper[4787]: I0126 18:04:17.996315 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k4b5g" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.002208 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9rdc5" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.082672 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-operator-scripts\") pod \"c96eb47d-13fc-4a65-9fe6-292dda4b1fec\" (UID: \"c96eb47d-13fc-4a65-9fe6-292dda4b1fec\") " Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.082732 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s2wc\" (UniqueName: \"kubernetes.io/projected/e2123e1b-d624-44fa-9b74-b7379df16f9a-kube-api-access-8s2wc\") pod \"e2123e1b-d624-44fa-9b74-b7379df16f9a\" (UID: \"e2123e1b-d624-44fa-9b74-b7379df16f9a\") " Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.082797 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7s9r\" (UniqueName: \"kubernetes.io/projected/67c4904e-d492-4cb7-ba9f-afa2bd393aca-kube-api-access-m7s9r\") pod \"67c4904e-d492-4cb7-ba9f-afa2bd393aca\" (UID: \"67c4904e-d492-4cb7-ba9f-afa2bd393aca\") " Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.082852 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2123e1b-d624-44fa-9b74-b7379df16f9a-operator-scripts\") pod \"e2123e1b-d624-44fa-9b74-b7379df16f9a\" (UID: \"e2123e1b-d624-44fa-9b74-b7379df16f9a\") " Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.082876 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dec399e-5b0d-4970-8e95-d17ec238f3a1-operator-scripts\") pod \"2dec399e-5b0d-4970-8e95-d17ec238f3a1\" (UID: \"2dec399e-5b0d-4970-8e95-d17ec238f3a1\") " Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.082893 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6md98\" (UniqueName: \"kubernetes.io/projected/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-kube-api-access-6md98\") pod \"c96eb47d-13fc-4a65-9fe6-292dda4b1fec\" (UID: \"c96eb47d-13fc-4a65-9fe6-292dda4b1fec\") " Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.083401 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbw8p\" (UniqueName: \"kubernetes.io/projected/7cb8c355-e367-439f-8584-7cf7a80fcc79-kube-api-access-lbw8p\") pod \"7cb8c355-e367-439f-8584-7cf7a80fcc79\" (UID: \"7cb8c355-e367-439f-8584-7cf7a80fcc79\") " Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.083245 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c96eb47d-13fc-4a65-9fe6-292dda4b1fec" (UID: "c96eb47d-13fc-4a65-9fe6-292dda4b1fec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.083341 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dec399e-5b0d-4970-8e95-d17ec238f3a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2dec399e-5b0d-4970-8e95-d17ec238f3a1" (UID: "2dec399e-5b0d-4970-8e95-d17ec238f3a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.083341 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2123e1b-d624-44fa-9b74-b7379df16f9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2123e1b-d624-44fa-9b74-b7379df16f9a" (UID: "e2123e1b-d624-44fa-9b74-b7379df16f9a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.083480 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b5vm\" (UniqueName: \"kubernetes.io/projected/2dec399e-5b0d-4970-8e95-d17ec238f3a1-kube-api-access-4b5vm\") pod \"2dec399e-5b0d-4970-8e95-d17ec238f3a1\" (UID: \"2dec399e-5b0d-4970-8e95-d17ec238f3a1\") " Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.083504 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c4904e-d492-4cb7-ba9f-afa2bd393aca-operator-scripts\") pod \"67c4904e-d492-4cb7-ba9f-afa2bd393aca\" (UID: \"67c4904e-d492-4cb7-ba9f-afa2bd393aca\") " Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.083991 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb8c355-e367-439f-8584-7cf7a80fcc79-operator-scripts\") pod \"7cb8c355-e367-439f-8584-7cf7a80fcc79\" (UID: \"7cb8c355-e367-439f-8584-7cf7a80fcc79\") " Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.084003 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c4904e-d492-4cb7-ba9f-afa2bd393aca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67c4904e-d492-4cb7-ba9f-afa2bd393aca" (UID: "67c4904e-d492-4cb7-ba9f-afa2bd393aca"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.084378 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.084396 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2123e1b-d624-44fa-9b74-b7379df16f9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.084409 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dec399e-5b0d-4970-8e95-d17ec238f3a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.084403 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb8c355-e367-439f-8584-7cf7a80fcc79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7cb8c355-e367-439f-8584-7cf7a80fcc79" (UID: "7cb8c355-e367-439f-8584-7cf7a80fcc79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.084417 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c4904e-d492-4cb7-ba9f-afa2bd393aca-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.086519 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dec399e-5b0d-4970-8e95-d17ec238f3a1-kube-api-access-4b5vm" (OuterVolumeSpecName: "kube-api-access-4b5vm") pod "2dec399e-5b0d-4970-8e95-d17ec238f3a1" (UID: "2dec399e-5b0d-4970-8e95-d17ec238f3a1"). InnerVolumeSpecName "kube-api-access-4b5vm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.086590 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2123e1b-d624-44fa-9b74-b7379df16f9a-kube-api-access-8s2wc" (OuterVolumeSpecName: "kube-api-access-8s2wc") pod "e2123e1b-d624-44fa-9b74-b7379df16f9a" (UID: "e2123e1b-d624-44fa-9b74-b7379df16f9a"). InnerVolumeSpecName "kube-api-access-8s2wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.087060 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb8c355-e367-439f-8584-7cf7a80fcc79-kube-api-access-lbw8p" (OuterVolumeSpecName: "kube-api-access-lbw8p") pod "7cb8c355-e367-439f-8584-7cf7a80fcc79" (UID: "7cb8c355-e367-439f-8584-7cf7a80fcc79"). InnerVolumeSpecName "kube-api-access-lbw8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.087417 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-kube-api-access-6md98" (OuterVolumeSpecName: "kube-api-access-6md98") pod "c96eb47d-13fc-4a65-9fe6-292dda4b1fec" (UID: "c96eb47d-13fc-4a65-9fe6-292dda4b1fec"). InnerVolumeSpecName "kube-api-access-6md98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.087565 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c4904e-d492-4cb7-ba9f-afa2bd393aca-kube-api-access-m7s9r" (OuterVolumeSpecName: "kube-api-access-m7s9r") pod "67c4904e-d492-4cb7-ba9f-afa2bd393aca" (UID: "67c4904e-d492-4cb7-ba9f-afa2bd393aca"). InnerVolumeSpecName "kube-api-access-m7s9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.186341 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s2wc\" (UniqueName: \"kubernetes.io/projected/e2123e1b-d624-44fa-9b74-b7379df16f9a-kube-api-access-8s2wc\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.186371 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7s9r\" (UniqueName: \"kubernetes.io/projected/67c4904e-d492-4cb7-ba9f-afa2bd393aca-kube-api-access-m7s9r\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.186382 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6md98\" (UniqueName: \"kubernetes.io/projected/c96eb47d-13fc-4a65-9fe6-292dda4b1fec-kube-api-access-6md98\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.186391 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbw8p\" (UniqueName: \"kubernetes.io/projected/7cb8c355-e367-439f-8584-7cf7a80fcc79-kube-api-access-lbw8p\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.186399 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b5vm\" (UniqueName: \"kubernetes.io/projected/2dec399e-5b0d-4970-8e95-d17ec238f3a1-kube-api-access-4b5vm\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.186410 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb8c355-e367-439f-8584-7cf7a80fcc79-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.190346 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-571d-account-create-update-vbsms" 
event={"ID":"7cb8c355-e367-439f-8584-7cf7a80fcc79","Type":"ContainerDied","Data":"279fe875a28d7a698001610430a5db53afdf757d5c14fdf8947c2aa19c510e99"} Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.190387 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="279fe875a28d7a698001610430a5db53afdf757d5c14fdf8947c2aa19c510e99" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.190362 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-571d-account-create-update-vbsms" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.191978 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9rdc5" event={"ID":"2dec399e-5b0d-4970-8e95-d17ec238f3a1","Type":"ContainerDied","Data":"627b8fc30df881b9d9178cf759ab37587f2b51985626326c6d9624d802435f27"} Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.192031 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="627b8fc30df881b9d9178cf759ab37587f2b51985626326c6d9624d802435f27" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.191987 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9rdc5" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.193379 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-558e-account-create-update-brzxq" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.193377 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-558e-account-create-update-brzxq" event={"ID":"c875d43e-8019-4eb4-8c37-d189a8eb0a01","Type":"ContainerDied","Data":"60aea5abbfb1a2414dc12532f3428fbda5c9a44f3ef912ebd443baa5299e0b35"} Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.193562 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60aea5abbfb1a2414dc12532f3428fbda5c9a44f3ef912ebd443baa5299e0b35" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.195024 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" event={"ID":"67c4904e-d492-4cb7-ba9f-afa2bd393aca","Type":"ContainerDied","Data":"90bdbf92ba552c923d83a28fa06554b060ff4f9fb623a1fdaa3c89dc28ea332f"} Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.195063 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90bdbf92ba552c923d83a28fa06554b060ff4f9fb623a1fdaa3c89dc28ea332f" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.195035 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fc4-account-create-update-qttc5" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.196891 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-k4b5g" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.196888 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k4b5g" event={"ID":"c96eb47d-13fc-4a65-9fe6-292dda4b1fec","Type":"ContainerDied","Data":"088a869c4f6721e9ddf09171cdeffee442d089352045453101230fcdad29674f"} Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.197005 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="088a869c4f6721e9ddf09171cdeffee442d089352045453101230fcdad29674f" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.198775 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vftnt" event={"ID":"e2123e1b-d624-44fa-9b74-b7379df16f9a","Type":"ContainerDied","Data":"b6f221700a30fe7adcf554f17849a599489c0fc9b185ed3489b865aad8fd18be"} Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.198815 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f221700a30fe7adcf554f17849a599489c0fc9b185ed3489b865aad8fd18be" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.198820 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vftnt" Jan 26 18:04:18 crc kubenswrapper[4787]: I0126 18:04:18.199256 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.362800 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qpg78"] Jan 26 18:04:19 crc kubenswrapper[4787]: E0126 18:04:19.363794 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96eb47d-13fc-4a65-9fe6-292dda4b1fec" containerName="mariadb-database-create" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.363814 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96eb47d-13fc-4a65-9fe6-292dda4b1fec" containerName="mariadb-database-create" Jan 26 18:04:19 crc kubenswrapper[4787]: E0126 18:04:19.363833 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dec399e-5b0d-4970-8e95-d17ec238f3a1" containerName="mariadb-database-create" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.363842 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dec399e-5b0d-4970-8e95-d17ec238f3a1" containerName="mariadb-database-create" Jan 26 18:04:19 crc kubenswrapper[4787]: E0126 18:04:19.363863 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c4904e-d492-4cb7-ba9f-afa2bd393aca" containerName="mariadb-account-create-update" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.363871 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c4904e-d492-4cb7-ba9f-afa2bd393aca" containerName="mariadb-account-create-update" Jan 26 18:04:19 crc kubenswrapper[4787]: E0126 18:04:19.363885 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2123e1b-d624-44fa-9b74-b7379df16f9a" containerName="mariadb-database-create" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.363894 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e2123e1b-d624-44fa-9b74-b7379df16f9a" containerName="mariadb-database-create" Jan 26 18:04:19 crc kubenswrapper[4787]: E0126 18:04:19.363906 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c875d43e-8019-4eb4-8c37-d189a8eb0a01" containerName="mariadb-account-create-update" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.363914 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c875d43e-8019-4eb4-8c37-d189a8eb0a01" containerName="mariadb-account-create-update" Jan 26 18:04:19 crc kubenswrapper[4787]: E0126 18:04:19.363931 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb8c355-e367-439f-8584-7cf7a80fcc79" containerName="mariadb-account-create-update" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.363938 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb8c355-e367-439f-8584-7cf7a80fcc79" containerName="mariadb-account-create-update" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.364157 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2123e1b-d624-44fa-9b74-b7379df16f9a" containerName="mariadb-database-create" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.364174 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96eb47d-13fc-4a65-9fe6-292dda4b1fec" containerName="mariadb-database-create" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.364187 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb8c355-e367-439f-8584-7cf7a80fcc79" containerName="mariadb-account-create-update" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.364198 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c875d43e-8019-4eb4-8c37-d189a8eb0a01" containerName="mariadb-account-create-update" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.364223 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dec399e-5b0d-4970-8e95-d17ec238f3a1" 
containerName="mariadb-database-create" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.364238 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c4904e-d492-4cb7-ba9f-afa2bd393aca" containerName="mariadb-account-create-update" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.364932 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.367758 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.367786 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-95sz6" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.367817 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.379241 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qpg78"] Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.520334 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-scripts\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.520442 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-config-data\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 
18:04:19.520613 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzx55\" (UniqueName: \"kubernetes.io/projected/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-kube-api-access-xzx55\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.520721 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.622349 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-scripts\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.622402 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-config-data\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.622472 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzx55\" (UniqueName: \"kubernetes.io/projected/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-kube-api-access-xzx55\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " 
pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.622506 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.628784 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-scripts\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.630653 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-config-data\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.641353 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.644329 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzx55\" (UniqueName: \"kubernetes.io/projected/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-kube-api-access-xzx55\") pod \"nova-cell0-conductor-db-sync-qpg78\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") " 
pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:19 crc kubenswrapper[4787]: I0126 18:04:19.694280 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qpg78" Jan 26 18:04:20 crc kubenswrapper[4787]: I0126 18:04:20.172355 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qpg78"] Jan 26 18:04:20 crc kubenswrapper[4787]: I0126 18:04:20.224556 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qpg78" event={"ID":"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7","Type":"ContainerStarted","Data":"a9eae027ea78e74b376a0798a9bfc04ed46f639cf78c049a5ed982b1284527f8"} Jan 26 18:04:21 crc kubenswrapper[4787]: I0126 18:04:21.756613 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:04:21 crc kubenswrapper[4787]: I0126 18:04:21.757459 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a8f5f31f-b089-4fff-a501-700527b53ae7" containerName="glance-httpd" containerID="cri-o://ef6ddf453382e486a5a257bd406f89a0335e45350885d60d9c024a0b7b2ded47" gracePeriod=30 Jan 26 18:04:21 crc kubenswrapper[4787]: I0126 18:04:21.758501 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a8f5f31f-b089-4fff-a501-700527b53ae7" containerName="glance-log" containerID="cri-o://0f1cd29082fb000604fcb6b4fffed07a1c3618edde8d45607d7bd88655d15cab" gracePeriod=30 Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.005200 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.005562 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="049748db-af29-4e1e-9666-17ad13383b6d" 
containerName="proxy-httpd" containerID="cri-o://2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398" gracePeriod=30 Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.005740 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="sg-core" containerID="cri-o://94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361" gracePeriod=30 Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.005797 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="ceilometer-notification-agent" containerID="cri-o://9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2" gracePeriod=30 Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.005498 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="ceilometer-central-agent" containerID="cri-o://6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a" gracePeriod=30 Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.248995 4787 generic.go:334] "Generic (PLEG): container finished" podID="049748db-af29-4e1e-9666-17ad13383b6d" containerID="2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398" exitCode=0 Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.249038 4787 generic.go:334] "Generic (PLEG): container finished" podID="049748db-af29-4e1e-9666-17ad13383b6d" containerID="94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361" exitCode=2 Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.249040 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"049748db-af29-4e1e-9666-17ad13383b6d","Type":"ContainerDied","Data":"2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398"} Jan 
26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.249082 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"049748db-af29-4e1e-9666-17ad13383b6d","Type":"ContainerDied","Data":"94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361"} Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.251697 4787 generic.go:334] "Generic (PLEG): container finished" podID="a8f5f31f-b089-4fff-a501-700527b53ae7" containerID="0f1cd29082fb000604fcb6b4fffed07a1c3618edde8d45607d7bd88655d15cab" exitCode=143 Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.251736 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a8f5f31f-b089-4fff-a501-700527b53ae7","Type":"ContainerDied","Data":"0f1cd29082fb000604fcb6b4fffed07a1c3618edde8d45607d7bd88655d15cab"} Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.710834 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.887218 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-sg-core-conf-yaml\") pod \"049748db-af29-4e1e-9666-17ad13383b6d\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.887462 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-combined-ca-bundle\") pod \"049748db-af29-4e1e-9666-17ad13383b6d\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.887503 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-log-httpd\") pod \"049748db-af29-4e1e-9666-17ad13383b6d\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.887553 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-scripts\") pod \"049748db-af29-4e1e-9666-17ad13383b6d\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.887577 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-config-data\") pod \"049748db-af29-4e1e-9666-17ad13383b6d\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.888251 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "049748db-af29-4e1e-9666-17ad13383b6d" (UID: "049748db-af29-4e1e-9666-17ad13383b6d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.888369 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-run-httpd\") pod \"049748db-af29-4e1e-9666-17ad13383b6d\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.888452 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cff92\" (UniqueName: \"kubernetes.io/projected/049748db-af29-4e1e-9666-17ad13383b6d-kube-api-access-cff92\") pod \"049748db-af29-4e1e-9666-17ad13383b6d\" (UID: \"049748db-af29-4e1e-9666-17ad13383b6d\") " Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.888784 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "049748db-af29-4e1e-9666-17ad13383b6d" (UID: "049748db-af29-4e1e-9666-17ad13383b6d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.889241 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.889262 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/049748db-af29-4e1e-9666-17ad13383b6d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.892344 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049748db-af29-4e1e-9666-17ad13383b6d-kube-api-access-cff92" (OuterVolumeSpecName: "kube-api-access-cff92") pod "049748db-af29-4e1e-9666-17ad13383b6d" (UID: "049748db-af29-4e1e-9666-17ad13383b6d"). InnerVolumeSpecName "kube-api-access-cff92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.893090 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-scripts" (OuterVolumeSpecName: "scripts") pod "049748db-af29-4e1e-9666-17ad13383b6d" (UID: "049748db-af29-4e1e-9666-17ad13383b6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.916098 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "049748db-af29-4e1e-9666-17ad13383b6d" (UID: "049748db-af29-4e1e-9666-17ad13383b6d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.962583 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "049748db-af29-4e1e-9666-17ad13383b6d" (UID: "049748db-af29-4e1e-9666-17ad13383b6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.991092 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cff92\" (UniqueName: \"kubernetes.io/projected/049748db-af29-4e1e-9666-17ad13383b6d-kube-api-access-cff92\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.991122 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.991142 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:22 crc kubenswrapper[4787]: I0126 18:04:22.991150 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.021576 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-config-data" (OuterVolumeSpecName: "config-data") pod "049748db-af29-4e1e-9666-17ad13383b6d" (UID: "049748db-af29-4e1e-9666-17ad13383b6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.092319 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049748db-af29-4e1e-9666-17ad13383b6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.263770 4787 generic.go:334] "Generic (PLEG): container finished" podID="049748db-af29-4e1e-9666-17ad13383b6d" containerID="9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2" exitCode=0 Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.263806 4787 generic.go:334] "Generic (PLEG): container finished" podID="049748db-af29-4e1e-9666-17ad13383b6d" containerID="6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a" exitCode=0 Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.263827 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"049748db-af29-4e1e-9666-17ad13383b6d","Type":"ContainerDied","Data":"9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2"} Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.263854 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.263901 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"049748db-af29-4e1e-9666-17ad13383b6d","Type":"ContainerDied","Data":"6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a"} Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.263916 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"049748db-af29-4e1e-9666-17ad13383b6d","Type":"ContainerDied","Data":"dd11945dd47fe916628481b39295dab1fbeb2c4e426c4c17642d697ec3888876"} Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.263935 4787 scope.go:117] "RemoveContainer" containerID="2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.308054 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.320735 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.344643 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:23 crc kubenswrapper[4787]: E0126 18:04:23.345028 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="ceilometer-notification-agent" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.345045 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="ceilometer-notification-agent" Jan 26 18:04:23 crc kubenswrapper[4787]: E0126 18:04:23.345067 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="ceilometer-central-agent" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.345073 4787 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="ceilometer-central-agent" Jan 26 18:04:23 crc kubenswrapper[4787]: E0126 18:04:23.345088 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="proxy-httpd" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.345097 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="proxy-httpd" Jan 26 18:04:23 crc kubenswrapper[4787]: E0126 18:04:23.345110 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="sg-core" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.345118 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="sg-core" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.345343 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="ceilometer-central-agent" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.345366 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="ceilometer-notification-agent" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.345376 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="sg-core" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.345395 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="049748db-af29-4e1e-9666-17ad13383b6d" containerName="proxy-httpd" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.352189 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.356542 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.356770 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.358843 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.405618 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.406146 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a0385f16-3fcd-47b1-8018-96960e1193bf" containerName="glance-log" containerID="cri-o://97cea5b5e73344eabfcedfb18888bbb658258dfb3869f5a9c449d9dc8c32ab52" gracePeriod=30 Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.406231 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a0385f16-3fcd-47b1-8018-96960e1193bf" containerName="glance-httpd" containerID="cri-o://8dfdbb53113f5cb34a10e2774c62b0304b580bd94401ed6e118d056c7f12c098" gracePeriod=30 Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.503342 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.503385 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-log-httpd\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.503405 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hssbn\" (UniqueName: \"kubernetes.io/projected/59059e11-6f07-4744-84e0-f180c579183b-kube-api-access-hssbn\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.503433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-config-data\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.503451 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-run-httpd\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.503502 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.503554 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-scripts\") pod \"ceilometer-0\" (UID: 
\"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.600442 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="049748db-af29-4e1e-9666-17ad13383b6d" path="/var/lib/kubelet/pods/049748db-af29-4e1e-9666-17ad13383b6d/volumes" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.605054 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-scripts\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.605454 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.605558 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-log-httpd\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.605663 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hssbn\" (UniqueName: \"kubernetes.io/projected/59059e11-6f07-4744-84e0-f180c579183b-kube-api-access-hssbn\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.605777 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-config-data\") pod 
\"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.605869 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-run-httpd\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.606002 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.606401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-log-httpd\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.607550 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-run-httpd\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.614106 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.614119 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-scripts\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.614295 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-config-data\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.614539 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.629586 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hssbn\" (UniqueName: \"kubernetes.io/projected/59059e11-6f07-4744-84e0-f180c579183b-kube-api-access-hssbn\") pod \"ceilometer-0\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.676963 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.688631 4787 scope.go:117] "RemoveContainer" containerID="94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.736494 4787 scope.go:117] "RemoveContainer" containerID="9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.760922 4787 scope.go:117] "RemoveContainer" containerID="6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.782533 4787 scope.go:117] "RemoveContainer" containerID="2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398" Jan 26 18:04:23 crc kubenswrapper[4787]: E0126 18:04:23.783080 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398\": container with ID starting with 2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398 not found: ID does not exist" containerID="2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.783124 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398"} err="failed to get container status \"2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398\": rpc error: code = NotFound desc = could not find container \"2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398\": container with ID starting with 2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398 not found: ID does not exist" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.783183 4787 scope.go:117] "RemoveContainer" 
containerID="94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361" Jan 26 18:04:23 crc kubenswrapper[4787]: E0126 18:04:23.783498 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361\": container with ID starting with 94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361 not found: ID does not exist" containerID="94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.783529 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361"} err="failed to get container status \"94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361\": rpc error: code = NotFound desc = could not find container \"94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361\": container with ID starting with 94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361 not found: ID does not exist" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.783556 4787 scope.go:117] "RemoveContainer" containerID="9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2" Jan 26 18:04:23 crc kubenswrapper[4787]: E0126 18:04:23.784479 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2\": container with ID starting with 9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2 not found: ID does not exist" containerID="9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.784502 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2"} err="failed to get container status \"9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2\": rpc error: code = NotFound desc = could not find container \"9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2\": container with ID starting with 9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2 not found: ID does not exist" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.784516 4787 scope.go:117] "RemoveContainer" containerID="6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a" Jan 26 18:04:23 crc kubenswrapper[4787]: E0126 18:04:23.784816 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a\": container with ID starting with 6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a not found: ID does not exist" containerID="6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.784858 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a"} err="failed to get container status \"6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a\": rpc error: code = NotFound desc = could not find container \"6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a\": container with ID starting with 6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a not found: ID does not exist" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.784902 4787 scope.go:117] "RemoveContainer" containerID="2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.785415 4787 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398"} err="failed to get container status \"2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398\": rpc error: code = NotFound desc = could not find container \"2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398\": container with ID starting with 2131aeae0f12b6a997147b505e7c5dcc31a508a1dd5d1f5972f970d17b06c398 not found: ID does not exist" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.785438 4787 scope.go:117] "RemoveContainer" containerID="94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.785722 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361"} err="failed to get container status \"94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361\": rpc error: code = NotFound desc = could not find container \"94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361\": container with ID starting with 94645c216e7326998c4621e2269248d8afff9c5cea754b210970b1ce6329b361 not found: ID does not exist" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.785742 4787 scope.go:117] "RemoveContainer" containerID="9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.786029 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2"} err="failed to get container status \"9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2\": rpc error: code = NotFound desc = could not find container \"9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2\": container with ID starting with 9fed5fb9033730984496eed7d3125e1f5e30191ccfca402f7f8eedd49a005de2 not 
found: ID does not exist" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.786056 4787 scope.go:117] "RemoveContainer" containerID="6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a" Jan 26 18:04:23 crc kubenswrapper[4787]: I0126 18:04:23.786390 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a"} err="failed to get container status \"6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a\": rpc error: code = NotFound desc = could not find container \"6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a\": container with ID starting with 6a56cf40b6c4ed19b766023bfaf85734fcc5bd300c3ce228375dd5bf3481074a not found: ID does not exist" Jan 26 18:04:24 crc kubenswrapper[4787]: I0126 18:04:24.275083 4787 generic.go:334] "Generic (PLEG): container finished" podID="a0385f16-3fcd-47b1-8018-96960e1193bf" containerID="97cea5b5e73344eabfcedfb18888bbb658258dfb3869f5a9c449d9dc8c32ab52" exitCode=143 Jan 26 18:04:24 crc kubenswrapper[4787]: I0126 18:04:24.275375 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0385f16-3fcd-47b1-8018-96960e1193bf","Type":"ContainerDied","Data":"97cea5b5e73344eabfcedfb18888bbb658258dfb3869f5a9c449d9dc8c32ab52"} Jan 26 18:04:24 crc kubenswrapper[4787]: I0126 18:04:24.322348 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:25 crc kubenswrapper[4787]: I0126 18:04:25.289027 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59059e11-6f07-4744-84e0-f180c579183b","Type":"ContainerStarted","Data":"4a890584700de2325908c1d6c7298578493abaa931468d4fbaa4245575dabf24"} Jan 26 18:04:25 crc kubenswrapper[4787]: I0126 18:04:25.291874 4787 generic.go:334] "Generic (PLEG): container finished" podID="a8f5f31f-b089-4fff-a501-700527b53ae7" 
containerID="ef6ddf453382e486a5a257bd406f89a0335e45350885d60d9c024a0b7b2ded47" exitCode=0 Jan 26 18:04:25 crc kubenswrapper[4787]: I0126 18:04:25.291906 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a8f5f31f-b089-4fff-a501-700527b53ae7","Type":"ContainerDied","Data":"ef6ddf453382e486a5a257bd406f89a0335e45350885d60d9c024a0b7b2ded47"} Jan 26 18:04:25 crc kubenswrapper[4787]: I0126 18:04:25.657895 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:04:26 crc kubenswrapper[4787]: I0126 18:04:26.090292 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="a8f5f31f-b089-4fff-a501-700527b53ae7" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.153:9292/healthcheck\": dial tcp 10.217.0.153:9292: connect: connection refused" Jan 26 18:04:26 crc kubenswrapper[4787]: I0126 18:04:26.090333 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="a8f5f31f-b089-4fff-a501-700527b53ae7" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9292/healthcheck\": dial tcp 10.217.0.153:9292: connect: connection refused" Jan 26 18:04:27 crc kubenswrapper[4787]: I0126 18:04:27.316845 4787 generic.go:334] "Generic (PLEG): container finished" podID="a0385f16-3fcd-47b1-8018-96960e1193bf" containerID="8dfdbb53113f5cb34a10e2774c62b0304b580bd94401ed6e118d056c7f12c098" exitCode=0 Jan 26 18:04:27 crc kubenswrapper[4787]: I0126 18:04:27.316901 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0385f16-3fcd-47b1-8018-96960e1193bf","Type":"ContainerDied","Data":"8dfdbb53113f5cb34a10e2774c62b0304b580bd94401ed6e118d056c7f12c098"} Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.659094 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.828091 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wclh6\" (UniqueName: \"kubernetes.io/projected/a0385f16-3fcd-47b1-8018-96960e1193bf-kube-api-access-wclh6\") pod \"a0385f16-3fcd-47b1-8018-96960e1193bf\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.828166 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-scripts\") pod \"a0385f16-3fcd-47b1-8018-96960e1193bf\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.828232 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-logs\") pod \"a0385f16-3fcd-47b1-8018-96960e1193bf\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.828282 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-httpd-run\") pod \"a0385f16-3fcd-47b1-8018-96960e1193bf\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.828298 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-combined-ca-bundle\") pod \"a0385f16-3fcd-47b1-8018-96960e1193bf\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.828331 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-config-data\") pod \"a0385f16-3fcd-47b1-8018-96960e1193bf\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.828370 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-public-tls-certs\") pod \"a0385f16-3fcd-47b1-8018-96960e1193bf\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.828388 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a0385f16-3fcd-47b1-8018-96960e1193bf\" (UID: \"a0385f16-3fcd-47b1-8018-96960e1193bf\") " Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.829701 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-logs" (OuterVolumeSpecName: "logs") pod "a0385f16-3fcd-47b1-8018-96960e1193bf" (UID: "a0385f16-3fcd-47b1-8018-96960e1193bf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.829919 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a0385f16-3fcd-47b1-8018-96960e1193bf" (UID: "a0385f16-3fcd-47b1-8018-96960e1193bf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.834151 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "a0385f16-3fcd-47b1-8018-96960e1193bf" (UID: "a0385f16-3fcd-47b1-8018-96960e1193bf"). 
InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.834184 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-scripts" (OuterVolumeSpecName: "scripts") pod "a0385f16-3fcd-47b1-8018-96960e1193bf" (UID: "a0385f16-3fcd-47b1-8018-96960e1193bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.835467 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0385f16-3fcd-47b1-8018-96960e1193bf-kube-api-access-wclh6" (OuterVolumeSpecName: "kube-api-access-wclh6") pod "a0385f16-3fcd-47b1-8018-96960e1193bf" (UID: "a0385f16-3fcd-47b1-8018-96960e1193bf"). InnerVolumeSpecName "kube-api-access-wclh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.866004 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0385f16-3fcd-47b1-8018-96960e1193bf" (UID: "a0385f16-3fcd-47b1-8018-96960e1193bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.876221 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-config-data" (OuterVolumeSpecName: "config-data") pod "a0385f16-3fcd-47b1-8018-96960e1193bf" (UID: "a0385f16-3fcd-47b1-8018-96960e1193bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.887565 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a0385f16-3fcd-47b1-8018-96960e1193bf" (UID: "a0385f16-3fcd-47b1-8018-96960e1193bf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.930921 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wclh6\" (UniqueName: \"kubernetes.io/projected/a0385f16-3fcd-47b1-8018-96960e1193bf-kube-api-access-wclh6\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.930975 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.930985 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.930994 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.931003 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0385f16-3fcd-47b1-8018-96960e1193bf-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.931012 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.931020 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0385f16-3fcd-47b1-8018-96960e1193bf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.931053 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 26 18:04:29 crc kubenswrapper[4787]: I0126 18:04:29.949577 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.032735 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.351784 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0385f16-3fcd-47b1-8018-96960e1193bf","Type":"ContainerDied","Data":"9154f07288325ba452e235462cf30b386b08cc2603c4a7d914a0469f5e3d611c"} Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.351863 4787 scope.go:117] "RemoveContainer" containerID="8dfdbb53113f5cb34a10e2774c62b0304b580bd94401ed6e118d056c7f12c098" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.353018 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.353702 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59059e11-6f07-4744-84e0-f180c579183b","Type":"ContainerStarted","Data":"ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53"} Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.357433 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a8f5f31f-b089-4fff-a501-700527b53ae7","Type":"ContainerDied","Data":"71ecb1933eff2eb34de4275c87bbef96af8952f89a9dc465100a97c9758c3e19"} Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.357475 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71ecb1933eff2eb34de4275c87bbef96af8952f89a9dc465100a97c9758c3e19" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.360196 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qpg78" event={"ID":"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7","Type":"ContainerStarted","Data":"5c164f905683f615c38814aa87f5da141d7ee3e302815af5ab8d55ab06d27a3b"} Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.387327 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qpg78" podStartSLOduration=2.156136055 podStartE2EDuration="11.387307785s" podCreationTimestamp="2026-01-26 18:04:19 +0000 UTC" firstStartedPulling="2026-01-26 18:04:20.176898305 +0000 UTC m=+1228.884034438" lastFinishedPulling="2026-01-26 18:04:29.408070035 +0000 UTC m=+1238.115206168" observedRunningTime="2026-01-26 18:04:30.376248375 +0000 UTC m=+1239.083384528" watchObservedRunningTime="2026-01-26 18:04:30.387307785 +0000 UTC m=+1239.094443918" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.390983 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.401797 4787 scope.go:117] "RemoveContainer" containerID="97cea5b5e73344eabfcedfb18888bbb658258dfb3869f5a9c449d9dc8c32ab52" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.410574 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.447060 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.456059 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:04:30 crc kubenswrapper[4787]: E0126 18:04:30.456449 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0385f16-3fcd-47b1-8018-96960e1193bf" containerName="glance-log" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.456459 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0385f16-3fcd-47b1-8018-96960e1193bf" containerName="glance-log" Jan 26 18:04:30 crc kubenswrapper[4787]: E0126 18:04:30.456473 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f5f31f-b089-4fff-a501-700527b53ae7" containerName="glance-httpd" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.456481 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f5f31f-b089-4fff-a501-700527b53ae7" containerName="glance-httpd" Jan 26 18:04:30 crc kubenswrapper[4787]: E0126 18:04:30.456504 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0385f16-3fcd-47b1-8018-96960e1193bf" containerName="glance-httpd" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.456511 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0385f16-3fcd-47b1-8018-96960e1193bf" containerName="glance-httpd" Jan 26 18:04:30 crc kubenswrapper[4787]: E0126 18:04:30.456525 4787 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a8f5f31f-b089-4fff-a501-700527b53ae7" containerName="glance-log" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.456531 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f5f31f-b089-4fff-a501-700527b53ae7" containerName="glance-log" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.456691 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0385f16-3fcd-47b1-8018-96960e1193bf" containerName="glance-httpd" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.456703 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f5f31f-b089-4fff-a501-700527b53ae7" containerName="glance-log" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.456714 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f5f31f-b089-4fff-a501-700527b53ae7" containerName="glance-httpd" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.456731 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0385f16-3fcd-47b1-8018-96960e1193bf" containerName="glance-log" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.457677 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.462030 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.471790 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.481152 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.541565 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-httpd-run\") pod \"a8f5f31f-b089-4fff-a501-700527b53ae7\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.541927 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-internal-tls-certs\") pod \"a8f5f31f-b089-4fff-a501-700527b53ae7\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.542013 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-logs\") pod \"a8f5f31f-b089-4fff-a501-700527b53ae7\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.542053 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"a8f5f31f-b089-4fff-a501-700527b53ae7\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 
18:04:30.542076 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npggs\" (UniqueName: \"kubernetes.io/projected/a8f5f31f-b089-4fff-a501-700527b53ae7-kube-api-access-npggs\") pod \"a8f5f31f-b089-4fff-a501-700527b53ae7\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.542100 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-config-data\") pod \"a8f5f31f-b089-4fff-a501-700527b53ae7\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.542106 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a8f5f31f-b089-4fff-a501-700527b53ae7" (UID: "a8f5f31f-b089-4fff-a501-700527b53ae7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.542196 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-combined-ca-bundle\") pod \"a8f5f31f-b089-4fff-a501-700527b53ae7\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.542236 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-scripts\") pod \"a8f5f31f-b089-4fff-a501-700527b53ae7\" (UID: \"a8f5f31f-b089-4fff-a501-700527b53ae7\") " Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.542423 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.542458 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.542476 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-logs" (OuterVolumeSpecName: "logs") pod "a8f5f31f-b089-4fff-a501-700527b53ae7" (UID: "a8f5f31f-b089-4fff-a501-700527b53ae7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.542508 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.548912 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-scripts" (OuterVolumeSpecName: "scripts") pod "a8f5f31f-b089-4fff-a501-700527b53ae7" (UID: "a8f5f31f-b089-4fff-a501-700527b53ae7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.551293 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjrzk\" (UniqueName: \"kubernetes.io/projected/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-kube-api-access-hjrzk\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.551367 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.551398 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-logs\") pod \"glance-default-external-api-0\" (UID: 
\"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.551452 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.551554 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.551679 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.551701 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.551712 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f5f31f-b089-4fff-a501-700527b53ae7-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.564486 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f5f31f-b089-4fff-a501-700527b53ae7-kube-api-access-npggs" (OuterVolumeSpecName: "kube-api-access-npggs") pod "a8f5f31f-b089-4fff-a501-700527b53ae7" (UID: 
"a8f5f31f-b089-4fff-a501-700527b53ae7"). InnerVolumeSpecName "kube-api-access-npggs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.577312 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "a8f5f31f-b089-4fff-a501-700527b53ae7" (UID: "a8f5f31f-b089-4fff-a501-700527b53ae7"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.593604 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8f5f31f-b089-4fff-a501-700527b53ae7" (UID: "a8f5f31f-b089-4fff-a501-700527b53ae7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.616116 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-config-data" (OuterVolumeSpecName: "config-data") pod "a8f5f31f-b089-4fff-a501-700527b53ae7" (UID: "a8f5f31f-b089-4fff-a501-700527b53ae7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.623191 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a8f5f31f-b089-4fff-a501-700527b53ae7" (UID: "a8f5f31f-b089-4fff-a501-700527b53ae7"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.652936 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.653081 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjrzk\" (UniqueName: \"kubernetes.io/projected/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-kube-api-access-hjrzk\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.653115 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.653140 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-logs\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.653171 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 
crc kubenswrapper[4787]: I0126 18:04:30.653221 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.653265 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.653291 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.653394 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.653412 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npggs\" (UniqueName: \"kubernetes.io/projected/a8f5f31f-b089-4fff-a501-700527b53ae7-kube-api-access-npggs\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.653425 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.653437 4787 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.653449 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f5f31f-b089-4fff-a501-700527b53ae7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.654115 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.654559 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.655784 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-logs\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.659777 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " 
pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.673817 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjrzk\" (UniqueName: \"kubernetes.io/projected/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-kube-api-access-hjrzk\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.675341 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.675591 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.676873 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.696781 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " pod="openstack/glance-default-external-api-0" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.696843 4787 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.755097 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:30 crc kubenswrapper[4787]: I0126 18:04:30.820020 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: W0126 18:04:31.360446 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd698097_04e8_4ad2_bc6e_fbdf16dfd12a.slice/crio-c6ca2913e07b39b8260513df9634fa8b073f3967c79fb69a1deb111145bc9697 WatchSource:0}: Error finding container c6ca2913e07b39b8260513df9634fa8b073f3967c79fb69a1deb111145bc9697: Status 404 returned error can't find the container with id c6ca2913e07b39b8260513df9634fa8b073f3967c79fb69a1deb111145bc9697 Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.362341 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.385686 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59059e11-6f07-4744-84e0-f180c579183b","Type":"ContainerStarted","Data":"34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1"} Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.386026 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.528396 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.566475 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.582002 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.584995 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.589324 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.589680 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.631554 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0385f16-3fcd-47b1-8018-96960e1193bf" path="/var/lib/kubelet/pods/a0385f16-3fcd-47b1-8018-96960e1193bf/volumes" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.632350 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f5f31f-b089-4fff-a501-700527b53ae7" path="/var/lib/kubelet/pods/a8f5f31f-b089-4fff-a501-700527b53ae7/volumes" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.632996 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.674871 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.674915 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.674940 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.675224 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.675546 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.675611 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-logs\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.675639 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzr4\" (UniqueName: \"kubernetes.io/projected/87c0c3c8-9282-45e5-b376-9c335e24573a-kube-api-access-kmzr4\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.675818 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.777467 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.777533 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-logs\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.777564 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzr4\" (UniqueName: 
\"kubernetes.io/projected/87c0c3c8-9282-45e5-b376-9c335e24573a-kube-api-access-kmzr4\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.777637 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.777697 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.777726 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.777758 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.777786 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.778107 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-logs\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.778470 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.784124 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.785561 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.787931 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.788448 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.788721 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.806273 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmzr4\" (UniqueName: \"kubernetes.io/projected/87c0c3c8-9282-45e5-b376-9c335e24573a-kube-api-access-kmzr4\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.812223 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:04:31 crc kubenswrapper[4787]: I0126 18:04:31.930975 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:04:32 crc kubenswrapper[4787]: I0126 18:04:32.405851 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59059e11-6f07-4744-84e0-f180c579183b","Type":"ContainerStarted","Data":"75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614"} Jan 26 18:04:32 crc kubenswrapper[4787]: I0126 18:04:32.409105 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a","Type":"ContainerStarted","Data":"0bdd2c4e02e5b71c8136f8efddfe53e1a758f24f948ca99efec537c186ed5b0c"} Jan 26 18:04:32 crc kubenswrapper[4787]: I0126 18:04:32.409138 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a","Type":"ContainerStarted","Data":"c6ca2913e07b39b8260513df9634fa8b073f3967c79fb69a1deb111145bc9697"} Jan 26 18:04:32 crc kubenswrapper[4787]: I0126 18:04:32.457181 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:04:33 crc kubenswrapper[4787]: I0126 18:04:33.418306 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a","Type":"ContainerStarted","Data":"011bedc70e267a9340d9ea488ce83a6f2966f96cd9f36e47bd7028368ceb1135"} Jan 26 18:04:33 crc kubenswrapper[4787]: I0126 18:04:33.423109 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59059e11-6f07-4744-84e0-f180c579183b","Type":"ContainerStarted","Data":"f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a"} Jan 26 18:04:33 crc kubenswrapper[4787]: I0126 18:04:33.423295 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59059e11-6f07-4744-84e0-f180c579183b" 
containerName="ceilometer-central-agent" containerID="cri-o://ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53" gracePeriod=30 Jan 26 18:04:33 crc kubenswrapper[4787]: I0126 18:04:33.423511 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:04:33 crc kubenswrapper[4787]: I0126 18:04:33.423560 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="proxy-httpd" containerID="cri-o://f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a" gracePeriod=30 Jan 26 18:04:33 crc kubenswrapper[4787]: I0126 18:04:33.423612 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="sg-core" containerID="cri-o://75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614" gracePeriod=30 Jan 26 18:04:33 crc kubenswrapper[4787]: I0126 18:04:33.423655 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="ceilometer-notification-agent" containerID="cri-o://34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1" gracePeriod=30 Jan 26 18:04:33 crc kubenswrapper[4787]: I0126 18:04:33.440297 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"87c0c3c8-9282-45e5-b376-9c335e24573a","Type":"ContainerStarted","Data":"ae0907e003fb71214ee6b4a55004aff424e981221e4c8a3a5bf129205ee9d3ee"} Jan 26 18:04:33 crc kubenswrapper[4787]: I0126 18:04:33.440339 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"87c0c3c8-9282-45e5-b376-9c335e24573a","Type":"ContainerStarted","Data":"74898663b7ab36600082e0191660aade4b8129774e844edf18695597665d2e10"} Jan 26 18:04:33 crc 
kubenswrapper[4787]: I0126 18:04:33.449469 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.449445004 podStartE2EDuration="3.449445004s" podCreationTimestamp="2026-01-26 18:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:04:33.439280085 +0000 UTC m=+1242.146416218" watchObservedRunningTime="2026-01-26 18:04:33.449445004 +0000 UTC m=+1242.156581137" Jan 26 18:04:33 crc kubenswrapper[4787]: I0126 18:04:33.478324 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7037577019999999 podStartE2EDuration="10.478298989s" podCreationTimestamp="2026-01-26 18:04:23 +0000 UTC" firstStartedPulling="2026-01-26 18:04:24.327043782 +0000 UTC m=+1233.034179915" lastFinishedPulling="2026-01-26 18:04:33.101585069 +0000 UTC m=+1241.808721202" observedRunningTime="2026-01-26 18:04:33.467132866 +0000 UTC m=+1242.174268989" watchObservedRunningTime="2026-01-26 18:04:33.478298989 +0000 UTC m=+1242.185435122" Jan 26 18:04:34 crc kubenswrapper[4787]: I0126 18:04:34.484129 4787 generic.go:334] "Generic (PLEG): container finished" podID="59059e11-6f07-4744-84e0-f180c579183b" containerID="f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a" exitCode=0 Jan 26 18:04:34 crc kubenswrapper[4787]: I0126 18:04:34.484698 4787 generic.go:334] "Generic (PLEG): container finished" podID="59059e11-6f07-4744-84e0-f180c579183b" containerID="75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614" exitCode=2 Jan 26 18:04:34 crc kubenswrapper[4787]: I0126 18:04:34.484711 4787 generic.go:334] "Generic (PLEG): container finished" podID="59059e11-6f07-4744-84e0-f180c579183b" containerID="34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1" exitCode=0 Jan 26 18:04:34 crc kubenswrapper[4787]: I0126 18:04:34.484164 
4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59059e11-6f07-4744-84e0-f180c579183b","Type":"ContainerDied","Data":"f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a"} Jan 26 18:04:34 crc kubenswrapper[4787]: I0126 18:04:34.484800 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59059e11-6f07-4744-84e0-f180c579183b","Type":"ContainerDied","Data":"75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614"} Jan 26 18:04:34 crc kubenswrapper[4787]: I0126 18:04:34.484816 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59059e11-6f07-4744-84e0-f180c579183b","Type":"ContainerDied","Data":"34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1"} Jan 26 18:04:34 crc kubenswrapper[4787]: I0126 18:04:34.486716 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"87c0c3c8-9282-45e5-b376-9c335e24573a","Type":"ContainerStarted","Data":"ef0bc42b7bd2dab6b5fbabd59a7d254961bab025f7081c2b30ff94990add57bc"} Jan 26 18:04:34 crc kubenswrapper[4787]: I0126 18:04:34.513820 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.513794944 podStartE2EDuration="3.513794944s" podCreationTimestamp="2026-01-26 18:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:04:34.50791277 +0000 UTC m=+1243.215048923" watchObservedRunningTime="2026-01-26 18:04:34.513794944 +0000 UTC m=+1243.220931077" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.286133 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.401049 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-log-httpd\") pod \"59059e11-6f07-4744-84e0-f180c579183b\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.401149 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-sg-core-conf-yaml\") pod \"59059e11-6f07-4744-84e0-f180c579183b\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.401202 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-run-httpd\") pod \"59059e11-6f07-4744-84e0-f180c579183b\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.401311 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-combined-ca-bundle\") pod \"59059e11-6f07-4744-84e0-f180c579183b\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.401393 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hssbn\" (UniqueName: \"kubernetes.io/projected/59059e11-6f07-4744-84e0-f180c579183b-kube-api-access-hssbn\") pod \"59059e11-6f07-4744-84e0-f180c579183b\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.401641 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "59059e11-6f07-4744-84e0-f180c579183b" (UID: "59059e11-6f07-4744-84e0-f180c579183b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.401697 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "59059e11-6f07-4744-84e0-f180c579183b" (UID: "59059e11-6f07-4744-84e0-f180c579183b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.402128 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-config-data\") pod \"59059e11-6f07-4744-84e0-f180c579183b\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.402185 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-scripts\") pod \"59059e11-6f07-4744-84e0-f180c579183b\" (UID: \"59059e11-6f07-4744-84e0-f180c579183b\") " Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.402617 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.402641 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59059e11-6f07-4744-84e0-f180c579183b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.413299 4787 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59059e11-6f07-4744-84e0-f180c579183b-kube-api-access-hssbn" (OuterVolumeSpecName: "kube-api-access-hssbn") pod "59059e11-6f07-4744-84e0-f180c579183b" (UID: "59059e11-6f07-4744-84e0-f180c579183b"). InnerVolumeSpecName "kube-api-access-hssbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.415149 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-scripts" (OuterVolumeSpecName: "scripts") pod "59059e11-6f07-4744-84e0-f180c579183b" (UID: "59059e11-6f07-4744-84e0-f180c579183b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.431534 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "59059e11-6f07-4744-84e0-f180c579183b" (UID: "59059e11-6f07-4744-84e0-f180c579183b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.488883 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59059e11-6f07-4744-84e0-f180c579183b" (UID: "59059e11-6f07-4744-84e0-f180c579183b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.501637 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-config-data" (OuterVolumeSpecName: "config-data") pod "59059e11-6f07-4744-84e0-f180c579183b" (UID: "59059e11-6f07-4744-84e0-f180c579183b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.504777 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.504814 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.504826 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hssbn\" (UniqueName: \"kubernetes.io/projected/59059e11-6f07-4744-84e0-f180c579183b-kube-api-access-hssbn\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.504837 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.504847 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59059e11-6f07-4744-84e0-f180c579183b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.527504 4787 generic.go:334] "Generic (PLEG): container finished" podID="59059e11-6f07-4744-84e0-f180c579183b" 
containerID="ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53" exitCode=0
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.527557 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.527561 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59059e11-6f07-4744-84e0-f180c579183b","Type":"ContainerDied","Data":"ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53"}
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.527598 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59059e11-6f07-4744-84e0-f180c579183b","Type":"ContainerDied","Data":"4a890584700de2325908c1d6c7298578493abaa931468d4fbaa4245575dabf24"}
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.527619 4787 scope.go:117] "RemoveContainer" containerID="f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.558167 4787 scope.go:117] "RemoveContainer" containerID="75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.565778 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.572981 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.584184 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 18:04:38 crc kubenswrapper[4787]: E0126 18:04:38.584633 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="ceilometer-notification-agent"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.584659 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="ceilometer-notification-agent"
Jan 26 18:04:38 crc kubenswrapper[4787]: E0126 18:04:38.584681 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="sg-core"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.584689 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="sg-core"
Jan 26 18:04:38 crc kubenswrapper[4787]: E0126 18:04:38.584709 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="proxy-httpd"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.584716 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="proxy-httpd"
Jan 26 18:04:38 crc kubenswrapper[4787]: E0126 18:04:38.584741 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="ceilometer-central-agent"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.584749 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="ceilometer-central-agent"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.584977 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="proxy-httpd"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.585005 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="sg-core"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.585019 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="ceilometer-central-agent"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.585036 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59059e11-6f07-4744-84e0-f180c579183b" containerName="ceilometer-notification-agent"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.587015 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.589099 4787 scope.go:117] "RemoveContainer" containerID="34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.595691 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.596115 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.632837 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.637638 4787 scope.go:117] "RemoveContainer" containerID="ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.662660 4787 scope.go:117] "RemoveContainer" containerID="f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a"
Jan 26 18:04:38 crc kubenswrapper[4787]: E0126 18:04:38.663251 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a\": container with ID starting with f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a not found: ID does not exist" containerID="f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.663311 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a"} err="failed to get container status \"f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a\": rpc error: code = NotFound desc = could not find container \"f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a\": container with ID starting with f93040dff4c805c539084ae5f747e8bcd58abad7b756b1fa18b48bc64ae5667a not found: ID does not exist"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.663388 4787 scope.go:117] "RemoveContainer" containerID="75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614"
Jan 26 18:04:38 crc kubenswrapper[4787]: E0126 18:04:38.664023 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614\": container with ID starting with 75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614 not found: ID does not exist" containerID="75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.664075 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614"} err="failed to get container status \"75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614\": rpc error: code = NotFound desc = could not find container \"75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614\": container with ID starting with 75d82b4ca7ba8b1f688aa45b555e2e23971b7dafff8c8348a6cfe1a5e3d62614 not found: ID does not exist"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.664103 4787 scope.go:117] "RemoveContainer" containerID="34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1"
Jan 26 18:04:38 crc kubenswrapper[4787]: E0126 18:04:38.664427 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1\": container with ID starting with 34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1 not found: ID does not exist" containerID="34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.664455 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1"} err="failed to get container status \"34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1\": rpc error: code = NotFound desc = could not find container \"34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1\": container with ID starting with 34b34d63ff20ab52b884c88eeec446e9424266ac820316d3bcdffc9ea6dd20c1 not found: ID does not exist"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.664471 4787 scope.go:117] "RemoveContainer" containerID="ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53"
Jan 26 18:04:38 crc kubenswrapper[4787]: E0126 18:04:38.664838 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53\": container with ID starting with ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53 not found: ID does not exist" containerID="ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.664864 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53"} err="failed to get container status \"ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53\": rpc error: code = NotFound desc = could not find container \"ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53\": container with ID starting with ac9af72fe90f6b81450f4f7fd8344d4866ebdef031ab61f0996e4e3371439a53 not found: ID does not exist"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.715147 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-config-data\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.715209 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.715263 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2tft\" (UniqueName: \"kubernetes.io/projected/20e74c27-77d5-499e-9b77-62b68816df9f-kube-api-access-n2tft\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.715329 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-run-httpd\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.715376 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-log-httpd\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.715414 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.715593 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-scripts\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.817106 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-config-data\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.817175 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.817201 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2tft\" (UniqueName: \"kubernetes.io/projected/20e74c27-77d5-499e-9b77-62b68816df9f-kube-api-access-n2tft\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.817223 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-run-httpd\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.817658 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-log-httpd\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.817683 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-run-httpd\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.817715 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.817853 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-scripts\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.818407 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-log-httpd\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.821022 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.821780 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.822815 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-scripts\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.822834 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-config-data\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.847849 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2tft\" (UniqueName: \"kubernetes.io/projected/20e74c27-77d5-499e-9b77-62b68816df9f-kube-api-access-n2tft\") pod \"ceilometer-0\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " pod="openstack/ceilometer-0"
Jan 26 18:04:38 crc kubenswrapper[4787]: I0126 18:04:38.929579 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 18:04:39 crc kubenswrapper[4787]: W0126 18:04:39.376131 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20e74c27_77d5_499e_9b77_62b68816df9f.slice/crio-b394a62e27f97b658ae6a1a527c8699a336e39313b1d57a1140963d42ba92526 WatchSource:0}: Error finding container b394a62e27f97b658ae6a1a527c8699a336e39313b1d57a1140963d42ba92526: Status 404 returned error can't find the container with id b394a62e27f97b658ae6a1a527c8699a336e39313b1d57a1140963d42ba92526
Jan 26 18:04:39 crc kubenswrapper[4787]: I0126 18:04:39.377944 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 18:04:39 crc kubenswrapper[4787]: I0126 18:04:39.539172 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e74c27-77d5-499e-9b77-62b68816df9f","Type":"ContainerStarted","Data":"b394a62e27f97b658ae6a1a527c8699a336e39313b1d57a1140963d42ba92526"}
Jan 26 18:04:39 crc kubenswrapper[4787]: I0126 18:04:39.605860 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59059e11-6f07-4744-84e0-f180c579183b" path="/var/lib/kubelet/pods/59059e11-6f07-4744-84e0-f180c579183b/volumes"
Jan 26 18:04:40 crc kubenswrapper[4787]: I0126 18:04:40.551627 4787 generic.go:334] "Generic (PLEG): container finished" podID="2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7" containerID="5c164f905683f615c38814aa87f5da141d7ee3e302815af5ab8d55ab06d27a3b" exitCode=0
Jan 26 18:04:40 crc kubenswrapper[4787]: I0126 18:04:40.551698 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qpg78" event={"ID":"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7","Type":"ContainerDied","Data":"5c164f905683f615c38814aa87f5da141d7ee3e302815af5ab8d55ab06d27a3b"}
Jan 26 18:04:40 crc kubenswrapper[4787]: I0126 18:04:40.554907 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e74c27-77d5-499e-9b77-62b68816df9f","Type":"ContainerStarted","Data":"430d539df16f56714d22efd8e90b3c1890f372416d685fefd30069e93e6e7643"}
Jan 26 18:04:40 crc kubenswrapper[4787]: I0126 18:04:40.820922 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 26 18:04:40 crc kubenswrapper[4787]: I0126 18:04:40.821043 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 26 18:04:40 crc kubenswrapper[4787]: I0126 18:04:40.858503 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 26 18:04:40 crc kubenswrapper[4787]: I0126 18:04:40.874147 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 26 18:04:41 crc kubenswrapper[4787]: I0126 18:04:41.565732 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e74c27-77d5-499e-9b77-62b68816df9f","Type":"ContainerStarted","Data":"fecca0cc548e59b341664307e9f68924ab9a33c465b0b433903f91d827876469"}
Jan 26 18:04:41 crc kubenswrapper[4787]: I0126 18:04:41.565790 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e74c27-77d5-499e-9b77-62b68816df9f","Type":"ContainerStarted","Data":"bcd0be3eeb093ce43ce9786f0e22909ddde3933b41c4d54a2c494578c8e688b3"}
Jan 26 18:04:41 crc kubenswrapper[4787]: I0126 18:04:41.566233 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 26 18:04:41 crc kubenswrapper[4787]: I0126 18:04:41.566278 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 26 18:04:41 crc kubenswrapper[4787]: I0126 18:04:41.931647 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 18:04:41 crc kubenswrapper[4787]: I0126 18:04:41.931928 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 18:04:41 crc kubenswrapper[4787]: I0126 18:04:41.949775 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qpg78"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.030085 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.035556 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.081631 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-scripts\") pod \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") "
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.081698 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-combined-ca-bundle\") pod \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") "
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.081754 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-config-data\") pod \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") "
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.081978 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzx55\" (UniqueName: \"kubernetes.io/projected/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-kube-api-access-xzx55\") pod \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\" (UID: \"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7\") "
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.091693 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-scripts" (OuterVolumeSpecName: "scripts") pod "2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7" (UID: "2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.106200 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-kube-api-access-xzx55" (OuterVolumeSpecName: "kube-api-access-xzx55") pod "2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7" (UID: "2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7"). InnerVolumeSpecName "kube-api-access-xzx55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.118941 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-config-data" (OuterVolumeSpecName: "config-data") pod "2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7" (UID: "2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.121800 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7" (UID: "2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.184631 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzx55\" (UniqueName: \"kubernetes.io/projected/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-kube-api-access-xzx55\") on node \"crc\" DevicePath \"\""
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.184674 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.184685 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.184693 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.585427 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qpg78" event={"ID":"2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7","Type":"ContainerDied","Data":"a9eae027ea78e74b376a0798a9bfc04ed46f639cf78c049a5ed982b1284527f8"}
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.585471 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9eae027ea78e74b376a0798a9bfc04ed46f639cf78c049a5ed982b1284527f8"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.585664 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qpg78"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.585816 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.585839 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.671549 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 18:04:42 crc kubenswrapper[4787]: E0126 18:04:42.672042 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7" containerName="nova-cell0-conductor-db-sync"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.672067 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7" containerName="nova-cell0-conductor-db-sync"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.672326 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7" containerName="nova-cell0-conductor-db-sync"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.673111 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.677932 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-95sz6"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.678266 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.687555 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.806454 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szh75\" (UniqueName: \"kubernetes.io/projected/41cca919-781a-48fb-99c1-ec7ebbb7c601-kube-api-access-szh75\") pod \"nova-cell0-conductor-0\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.807142 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.807341 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.908982 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.909140 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szh75\" (UniqueName: \"kubernetes.io/projected/41cca919-781a-48fb-99c1-ec7ebbb7c601-kube-api-access-szh75\") pod \"nova-cell0-conductor-0\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.909186 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.912881 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.913579 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.926479 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szh75\" (UniqueName: \"kubernetes.io/projected/41cca919-781a-48fb-99c1-ec7ebbb7c601-kube-api-access-szh75\") pod \"nova-cell0-conductor-0\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:42 crc kubenswrapper[4787]: I0126 18:04:42.995146 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:43 crc kubenswrapper[4787]: I0126 18:04:43.459974 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 18:04:43 crc kubenswrapper[4787]: I0126 18:04:43.607815 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e74c27-77d5-499e-9b77-62b68816df9f","Type":"ContainerStarted","Data":"bc6ad5472f4b52a21d777483fb774b5bdc0e32251b09e852f7a61df7009015a5"}
Jan 26 18:04:43 crc kubenswrapper[4787]: I0126 18:04:43.608144 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 18:04:43 crc kubenswrapper[4787]: I0126 18:04:43.610859 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"41cca919-781a-48fb-99c1-ec7ebbb7c601","Type":"ContainerStarted","Data":"c63cd4f6718b8b3c9bb4e507576f786b503e50beb8b8bfa0c889831adf378f29"}
Jan 26 18:04:43 crc kubenswrapper[4787]: I0126 18:04:43.634598 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.31642249 podStartE2EDuration="5.634582606s" podCreationTimestamp="2026-01-26 18:04:38 +0000 UTC" firstStartedPulling="2026-01-26 18:04:39.378750596 +0000 UTC m=+1248.085886749" lastFinishedPulling="2026-01-26 18:04:42.696910722 +0000 UTC m=+1251.404046865" observedRunningTime="2026-01-26 18:04:43.628839595 +0000 UTC m=+1252.335975728" watchObservedRunningTime="2026-01-26 18:04:43.634582606 +0000 UTC m=+1252.341718729"
Jan 26 18:04:43 crc kubenswrapper[4787]: I0126 18:04:43.660315 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 26 18:04:43 crc kubenswrapper[4787]: I0126 18:04:43.660435 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 18:04:43 crc kubenswrapper[4787]: I0126 18:04:43.710275 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 26 18:04:44 crc kubenswrapper[4787]: I0126 18:04:44.622823 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"41cca919-781a-48fb-99c1-ec7ebbb7c601","Type":"ContainerStarted","Data":"dabc44d45ae4780a822ff11965f740a88ac60a4cb63ea6c15426d7cfbc4851b5"}
Jan 26 18:04:44 crc kubenswrapper[4787]: I0126 18:04:44.623243 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 26 18:04:44 crc kubenswrapper[4787]: I0126 18:04:44.641612 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.641595503 podStartE2EDuration="2.641595503s" podCreationTimestamp="2026-01-26 18:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:04:44.639205846 +0000 UTC m=+1253.346341979" watchObservedRunningTime="2026-01-26 18:04:44.641595503 +0000 UTC m=+1253.348731626"
Jan 26 18:04:44 crc kubenswrapper[4787]: I0126 18:04:44.980770 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 26 18:04:44 crc kubenswrapper[4787]: I0126 18:04:44.980899 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 18:04:44 crc kubenswrapper[4787]: I0126 18:04:44.982150 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 26 18:04:46 crc kubenswrapper[4787]: I0126 18:04:46.807475 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 18:04:46 crc kubenswrapper[4787]: I0126 18:04:46.807795 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 18:04:46 crc kubenswrapper[4787]: I0126 18:04:46.807845 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8"
Jan 26 18:04:46 crc kubenswrapper[4787]: I0126 18:04:46.808605 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b92bcf71cd03a611c9ec00077b120f3d83120698e0ca3da40d94cf74a7cfe86"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 18:04:46 crc kubenswrapper[4787]: I0126 18:04:46.808671 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://4b92bcf71cd03a611c9ec00077b120f3d83120698e0ca3da40d94cf74a7cfe86" gracePeriod=600
Jan 26 18:04:47 crc kubenswrapper[4787]: I0126 18:04:47.650544 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="4b92bcf71cd03a611c9ec00077b120f3d83120698e0ca3da40d94cf74a7cfe86" exitCode=0
Jan 26 18:04:47 crc kubenswrapper[4787]: I0126 18:04:47.650693
4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"4b92bcf71cd03a611c9ec00077b120f3d83120698e0ca3da40d94cf74a7cfe86"} Jan 26 18:04:47 crc kubenswrapper[4787]: I0126 18:04:47.650799 4787 scope.go:117] "RemoveContainer" containerID="72456f5ee53807e4dd44f118cd885938e3fabd30b979df1c99e1042ba20d5aff" Jan 26 18:04:48 crc kubenswrapper[4787]: I0126 18:04:48.665569 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"113c45a679e7cee59810e8e7b032bbf95c23e0a2fbc209f74960d3bd68199f7f"} Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.025916 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.468589 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-pb7c9"] Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.469829 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.473467 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.473728 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.483020 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pb7c9"] Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.499461 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswr7\" (UniqueName: \"kubernetes.io/projected/db02f653-5216-4701-9ab6-3cf3e9352d87-kube-api-access-fswr7\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.499546 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-config-data\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.499616 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-scripts\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.499647 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.603448 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-scripts\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.603518 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.603640 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fswr7\" (UniqueName: \"kubernetes.io/projected/db02f653-5216-4701-9ab6-3cf3e9352d87-kube-api-access-fswr7\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.603696 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-config-data\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.619856 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-config-data\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.631503 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-scripts\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.635012 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.664113 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fswr7\" (UniqueName: \"kubernetes.io/projected/db02f653-5216-4701-9ab6-3cf3e9352d87-kube-api-access-fswr7\") pod \"nova-cell0-cell-mapping-pb7c9\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.678325 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.680160 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.693024 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.699285 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.764775 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.766144 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.771484 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.777793 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.806004 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.807090 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074b52bf-dea3-41cc-8d16-9f979d1536d0-logs\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.807123 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55xfs\" (UniqueName: \"kubernetes.io/projected/074b52bf-dea3-41cc-8d16-9f979d1536d0-kube-api-access-55xfs\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.807214 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-config-data\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.807234 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.904119 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.905636 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.908296 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvbh\" (UniqueName: \"kubernetes.io/projected/c4fe2236-fb3c-42c4-b91f-ccf847666b71-kube-api-access-6vvbh\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.908372 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-config-data\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.908390 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.908439 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.908474 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:04:53 crc 
kubenswrapper[4787]: I0126 18:04:53.908492 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074b52bf-dea3-41cc-8d16-9f979d1536d0-logs\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.908520 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55xfs\" (UniqueName: \"kubernetes.io/projected/074b52bf-dea3-41cc-8d16-9f979d1536d0-kube-api-access-55xfs\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.911346 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074b52bf-dea3-41cc-8d16-9f979d1536d0-logs\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.912989 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.918817 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.920983 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-config-data\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.979300 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-metadata-0"] Jan 26 18:04:53 crc kubenswrapper[4787]: I0126 18:04:53.991906 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55xfs\" (UniqueName: \"kubernetes.io/projected/074b52bf-dea3-41cc-8d16-9f979d1536d0-kube-api-access-55xfs\") pod \"nova-api-0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " pod="openstack/nova-api-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.011869 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.012246 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.012395 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.012507 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8a5382-f4c9-4993-ae76-8fde9df100a4-logs\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.012785 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6vvbh\" (UniqueName: \"kubernetes.io/projected/c4fe2236-fb3c-42c4-b91f-ccf847666b71-kube-api-access-6vvbh\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.013060 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-config-data\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.013196 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5px5t\" (UniqueName: \"kubernetes.io/projected/6b8a5382-f4c9-4993-ae76-8fde9df100a4-kube-api-access-5px5t\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.026318 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.036906 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.046912 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvbh\" (UniqueName: 
\"kubernetes.io/projected/c4fe2236-fb3c-42c4-b91f-ccf847666b71-kube-api-access-6vvbh\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.050676 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.053965 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.059880 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.083633 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.096876 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.100367 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.116489 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-z45bz"] Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.119447 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.119522 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8a5382-f4c9-4993-ae76-8fde9df100a4-logs\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.119625 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-config-data\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.119664 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5px5t\" (UniqueName: \"kubernetes.io/projected/6b8a5382-f4c9-4993-ae76-8fde9df100a4-kube-api-access-5px5t\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.119936 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.120641 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8a5382-f4c9-4993-ae76-8fde9df100a4-logs\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.124295 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.130048 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-z45bz"] Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.131080 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-config-data\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.139323 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5px5t\" (UniqueName: \"kubernetes.io/projected/6b8a5382-f4c9-4993-ae76-8fde9df100a4-kube-api-access-5px5t\") pod \"nova-metadata-0\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.221240 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: 
\"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.221317 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " pod="openstack/nova-scheduler-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.221343 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-config-data\") pod \"nova-scheduler-0\" (UID: \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " pod="openstack/nova-scheduler-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.221424 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-config\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.221451 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.221473 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: 
\"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.221492 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndp6m\" (UniqueName: \"kubernetes.io/projected/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-kube-api-access-ndp6m\") pod \"nova-scheduler-0\" (UID: \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " pod="openstack/nova-scheduler-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.221507 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.221527 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kllw2\" (UniqueName: \"kubernetes.io/projected/8e9c8669-3692-4400-9699-a7892393fb7c-kube-api-access-kllw2\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.323997 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " pod="openstack/nova-scheduler-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.324055 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-config-data\") pod \"nova-scheduler-0\" (UID: 
\"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " pod="openstack/nova-scheduler-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.324135 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-config\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.324158 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.324182 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.324206 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndp6m\" (UniqueName: \"kubernetes.io/projected/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-kube-api-access-ndp6m\") pod \"nova-scheduler-0\" (UID: \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " pod="openstack/nova-scheduler-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.324224 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 
18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.324244 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kllw2\" (UniqueName: \"kubernetes.io/projected/8e9c8669-3692-4400-9699-a7892393fb7c-kube-api-access-kllw2\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.324277 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.325204 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.325276 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.325771 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 
18:04:54.325797 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-config\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.326112 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.329021 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " pod="openstack/nova-scheduler-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.347673 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-config-data\") pod \"nova-scheduler-0\" (UID: \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " pod="openstack/nova-scheduler-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.351765 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.357575 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kllw2\" (UniqueName: \"kubernetes.io/projected/8e9c8669-3692-4400-9699-a7892393fb7c-kube-api-access-kllw2\") pod \"dnsmasq-dns-647df7b8c5-z45bz\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.357736 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndp6m\" (UniqueName: \"kubernetes.io/projected/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-kube-api-access-ndp6m\") pod \"nova-scheduler-0\" (UID: \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " pod="openstack/nova-scheduler-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.429963 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.454221 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.602780 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pb7c9"] Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.665330 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:04:54 crc kubenswrapper[4787]: W0126 18:04:54.671135 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod074b52bf_dea3_41cc_8d16_9f979d1536d0.slice/crio-e65fc8a4ea6bb611c4b5bc07f818b22974e3a3818a9598785175eb092b86dfb8 WatchSource:0}: Error finding container e65fc8a4ea6bb611c4b5bc07f818b22974e3a3818a9598785175eb092b86dfb8: Status 404 returned error can't find the container with id e65fc8a4ea6bb611c4b5bc07f818b22974e3a3818a9598785175eb092b86dfb8 Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.761858 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"074b52bf-dea3-41cc-8d16-9f979d1536d0","Type":"ContainerStarted","Data":"e65fc8a4ea6bb611c4b5bc07f818b22974e3a3818a9598785175eb092b86dfb8"} Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.768724 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.769100 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pb7c9" event={"ID":"db02f653-5216-4701-9ab6-3cf3e9352d87","Type":"ContainerStarted","Data":"b765338f3df72aba330c366fe5a9cf949e137ab381a63dd43218889bc78f4367"} Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.793553 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8fcbn"] Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.794843 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.797970 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.798192 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.823481 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8fcbn"] Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.946147 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-config-data\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.946527 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6h6x\" (UniqueName: \"kubernetes.io/projected/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-kube-api-access-l6h6x\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.946666 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-scripts\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.946700 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:54 crc kubenswrapper[4787]: I0126 18:04:54.958886 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.049772 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-config-data\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.049850 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6h6x\" (UniqueName: \"kubernetes.io/projected/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-kube-api-access-l6h6x\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.049975 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-scripts\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.049999 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " 
pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.056385 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-scripts\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.058772 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.058921 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-config-data\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.067529 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6h6x\" (UniqueName: \"kubernetes.io/projected/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-kube-api-access-l6h6x\") pod \"nova-cell1-conductor-db-sync-8fcbn\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.120419 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-z45bz"] Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.166392 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.410410 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.650226 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8fcbn"] Jan 26 18:04:55 crc kubenswrapper[4787]: W0126 18:04:55.673181 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb74bd67_97b1_4f96_9f5d_6ebc4b14ab4f.slice/crio-f9cd67884bd11af0c0821d4e756ed23fd2ef08c40976aac822341ccb28fd8e6e WatchSource:0}: Error finding container f9cd67884bd11af0c0821d4e756ed23fd2ef08c40976aac822341ccb28fd8e6e: Status 404 returned error can't find the container with id f9cd67884bd11af0c0821d4e756ed23fd2ef08c40976aac822341ccb28fd8e6e Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.798730 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8fcbn" event={"ID":"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f","Type":"ContainerStarted","Data":"f9cd67884bd11af0c0821d4e756ed23fd2ef08c40976aac822341ccb28fd8e6e"} Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.800493 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c67738ab-5ab4-4b96-87f1-b1238e0df8fb","Type":"ContainerStarted","Data":"eb1c50bec0a7ea902c4709893df52022b4045b6d3d7895d6d181345f1150415a"} Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.811254 4787 generic.go:334] "Generic (PLEG): container finished" podID="8e9c8669-3692-4400-9699-a7892393fb7c" containerID="9f6cb8c7c9c52bc527b6db3c961bea5b572e2bcd4d6880f1237fe1e4e5a8bcf0" exitCode=0 Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.811608 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" 
event={"ID":"8e9c8669-3692-4400-9699-a7892393fb7c","Type":"ContainerDied","Data":"9f6cb8c7c9c52bc527b6db3c961bea5b572e2bcd4d6880f1237fe1e4e5a8bcf0"} Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.811642 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" event={"ID":"8e9c8669-3692-4400-9699-a7892393fb7c","Type":"ContainerStarted","Data":"4f570b5f1434c2d8ee93a7e536dc7e3187a799e2e02cbf7ea4948f66e4a0d044"} Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.819506 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b8a5382-f4c9-4993-ae76-8fde9df100a4","Type":"ContainerStarted","Data":"c11624573f65da712cb8ae2cc509df131024edc89594361329aae0f55a91756e"} Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.826116 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c4fe2236-fb3c-42c4-b91f-ccf847666b71","Type":"ContainerStarted","Data":"08efaf9603d4fa11e7d71f55b2f111992d88b50f7e0f7795ed4738fcb38cf928"} Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.849681 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pb7c9" event={"ID":"db02f653-5216-4701-9ab6-3cf3e9352d87","Type":"ContainerStarted","Data":"5d3bd6542ee023476beb041680c535d96acb21ed4b494b28f57144c146974554"} Jan 26 18:04:55 crc kubenswrapper[4787]: I0126 18:04:55.885310 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pb7c9" podStartSLOduration=2.8852883240000002 podStartE2EDuration="2.885288324s" podCreationTimestamp="2026-01-26 18:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:04:55.864361953 +0000 UTC m=+1264.571498086" watchObservedRunningTime="2026-01-26 18:04:55.885288324 +0000 UTC m=+1264.592424457" Jan 26 18:04:56 crc 
kubenswrapper[4787]: I0126 18:04:56.861695 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8fcbn" event={"ID":"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f","Type":"ContainerStarted","Data":"090e2277c794b4b34d898ea6fc2742fd889317e021791fd07b23fecfee606475"} Jan 26 18:04:56 crc kubenswrapper[4787]: I0126 18:04:56.867318 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" event={"ID":"8e9c8669-3692-4400-9699-a7892393fb7c","Type":"ContainerStarted","Data":"c2a537cb6f61c3da89db2a989c5e41821717495c5cd63da19464b828f6e8ec9c"} Jan 26 18:04:56 crc kubenswrapper[4787]: I0126 18:04:56.867367 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:04:56 crc kubenswrapper[4787]: I0126 18:04:56.882770 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8fcbn" podStartSLOduration=2.882751959 podStartE2EDuration="2.882751959s" podCreationTimestamp="2026-01-26 18:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:04:56.881761634 +0000 UTC m=+1265.588897767" watchObservedRunningTime="2026-01-26 18:04:56.882751959 +0000 UTC m=+1265.589888092" Jan 26 18:04:56 crc kubenswrapper[4787]: I0126 18:04:56.902980 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" podStartSLOduration=3.902961903 podStartE2EDuration="3.902961903s" podCreationTimestamp="2026-01-26 18:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:04:56.900284678 +0000 UTC m=+1265.607420831" watchObservedRunningTime="2026-01-26 18:04:56.902961903 +0000 UTC m=+1265.610098036" Jan 26 18:04:57 crc kubenswrapper[4787]: I0126 
18:04:57.339408 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:04:57 crc kubenswrapper[4787]: I0126 18:04:57.350718 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:04:59 crc kubenswrapper[4787]: I0126 18:04:59.895218 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b8a5382-f4c9-4993-ae76-8fde9df100a4","Type":"ContainerStarted","Data":"cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6"} Jan 26 18:04:59 crc kubenswrapper[4787]: I0126 18:04:59.896972 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c4fe2236-fb3c-42c4-b91f-ccf847666b71","Type":"ContainerStarted","Data":"a944c399e051ab43c2f49a7954480b84c4ad98a2cd0b528ec30a787b142bcdfc"} Jan 26 18:04:59 crc kubenswrapper[4787]: I0126 18:04:59.897096 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c4fe2236-fb3c-42c4-b91f-ccf847666b71" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a944c399e051ab43c2f49a7954480b84c4ad98a2cd0b528ec30a787b142bcdfc" gracePeriod=30 Jan 26 18:04:59 crc kubenswrapper[4787]: I0126 18:04:59.900520 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"074b52bf-dea3-41cc-8d16-9f979d1536d0","Type":"ContainerStarted","Data":"b238443b9f5decbd325576ee239422b7e86e34f9019a74fb98b37f687d737e2f"} Jan 26 18:04:59 crc kubenswrapper[4787]: I0126 18:04:59.901928 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c67738ab-5ab4-4b96-87f1-b1238e0df8fb","Type":"ContainerStarted","Data":"bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5"} Jan 26 18:04:59 crc kubenswrapper[4787]: I0126 18:04:59.922681 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.5610040659999997 podStartE2EDuration="6.922660604s" podCreationTimestamp="2026-01-26 18:04:53 +0000 UTC" firstStartedPulling="2026-01-26 18:04:54.803641501 +0000 UTC m=+1263.510777634" lastFinishedPulling="2026-01-26 18:04:59.165298039 +0000 UTC m=+1267.872434172" observedRunningTime="2026-01-26 18:04:59.916017133 +0000 UTC m=+1268.623153256" watchObservedRunningTime="2026-01-26 18:04:59.922660604 +0000 UTC m=+1268.629796737" Jan 26 18:04:59 crc kubenswrapper[4787]: I0126 18:04:59.940548 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.193042887 podStartE2EDuration="6.940531181s" podCreationTimestamp="2026-01-26 18:04:53 +0000 UTC" firstStartedPulling="2026-01-26 18:04:55.418185425 +0000 UTC m=+1264.125321578" lastFinishedPulling="2026-01-26 18:04:59.165673739 +0000 UTC m=+1267.872809872" observedRunningTime="2026-01-26 18:04:59.931845019 +0000 UTC m=+1268.638981152" watchObservedRunningTime="2026-01-26 18:04:59.940531181 +0000 UTC m=+1268.647667314" Jan 26 18:05:00 crc kubenswrapper[4787]: I0126 18:05:00.927382 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b8a5382-f4c9-4993-ae76-8fde9df100a4","Type":"ContainerStarted","Data":"309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3"} Jan 26 18:05:00 crc kubenswrapper[4787]: I0126 18:05:00.928014 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6b8a5382-f4c9-4993-ae76-8fde9df100a4" containerName="nova-metadata-log" containerID="cri-o://cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6" gracePeriod=30 Jan 26 18:05:00 crc kubenswrapper[4787]: I0126 18:05:00.928485 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6b8a5382-f4c9-4993-ae76-8fde9df100a4" 
containerName="nova-metadata-metadata" containerID="cri-o://309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3" gracePeriod=30 Jan 26 18:05:00 crc kubenswrapper[4787]: I0126 18:05:00.935990 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"074b52bf-dea3-41cc-8d16-9f979d1536d0","Type":"ContainerStarted","Data":"6f2d9139b484d7beab4cdf3f58277ed237ea337244111a6cb45dc212687c343e"} Jan 26 18:05:00 crc kubenswrapper[4787]: I0126 18:05:00.957085 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.7494410609999997 podStartE2EDuration="7.957062393s" podCreationTimestamp="2026-01-26 18:04:53 +0000 UTC" firstStartedPulling="2026-01-26 18:04:54.957887932 +0000 UTC m=+1263.665024075" lastFinishedPulling="2026-01-26 18:04:59.165509274 +0000 UTC m=+1267.872645407" observedRunningTime="2026-01-26 18:05:00.94998834 +0000 UTC m=+1269.657124493" watchObservedRunningTime="2026-01-26 18:05:00.957062393 +0000 UTC m=+1269.664198526" Jan 26 18:05:00 crc kubenswrapper[4787]: I0126 18:05:00.979068 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.471604428 podStartE2EDuration="7.97905174s" podCreationTimestamp="2026-01-26 18:04:53 +0000 UTC" firstStartedPulling="2026-01-26 18:04:54.674845523 +0000 UTC m=+1263.381981656" lastFinishedPulling="2026-01-26 18:04:59.182292845 +0000 UTC m=+1267.889428968" observedRunningTime="2026-01-26 18:05:00.972246704 +0000 UTC m=+1269.679382847" watchObservedRunningTime="2026-01-26 18:05:00.97905174 +0000 UTC m=+1269.686187873" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.530203 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.684616 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-config-data\") pod \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.684677 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-combined-ca-bundle\") pod \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.684730 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8a5382-f4c9-4993-ae76-8fde9df100a4-logs\") pod \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.684769 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5px5t\" (UniqueName: \"kubernetes.io/projected/6b8a5382-f4c9-4993-ae76-8fde9df100a4-kube-api-access-5px5t\") pod \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\" (UID: \"6b8a5382-f4c9-4993-ae76-8fde9df100a4\") " Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.685813 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8a5382-f4c9-4993-ae76-8fde9df100a4-logs" (OuterVolumeSpecName: "logs") pod "6b8a5382-f4c9-4993-ae76-8fde9df100a4" (UID: "6b8a5382-f4c9-4993-ae76-8fde9df100a4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.690554 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8a5382-f4c9-4993-ae76-8fde9df100a4-kube-api-access-5px5t" (OuterVolumeSpecName: "kube-api-access-5px5t") pod "6b8a5382-f4c9-4993-ae76-8fde9df100a4" (UID: "6b8a5382-f4c9-4993-ae76-8fde9df100a4"). InnerVolumeSpecName "kube-api-access-5px5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.720875 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b8a5382-f4c9-4993-ae76-8fde9df100a4" (UID: "6b8a5382-f4c9-4993-ae76-8fde9df100a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.723821 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-config-data" (OuterVolumeSpecName: "config-data") pod "6b8a5382-f4c9-4993-ae76-8fde9df100a4" (UID: "6b8a5382-f4c9-4993-ae76-8fde9df100a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.786729 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5px5t\" (UniqueName: \"kubernetes.io/projected/6b8a5382-f4c9-4993-ae76-8fde9df100a4-kube-api-access-5px5t\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.786767 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.786777 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8a5382-f4c9-4993-ae76-8fde9df100a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.786785 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8a5382-f4c9-4993-ae76-8fde9df100a4-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.950001 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.949915 4787 generic.go:334] "Generic (PLEG): container finished" podID="6b8a5382-f4c9-4993-ae76-8fde9df100a4" containerID="309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3" exitCode=0 Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.950116 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b8a5382-f4c9-4993-ae76-8fde9df100a4","Type":"ContainerDied","Data":"309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3"} Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.950160 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b8a5382-f4c9-4993-ae76-8fde9df100a4","Type":"ContainerDied","Data":"cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6"} Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.950189 4787 scope.go:117] "RemoveContainer" containerID="309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3" Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.950058 4787 generic.go:334] "Generic (PLEG): container finished" podID="6b8a5382-f4c9-4993-ae76-8fde9df100a4" containerID="cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6" exitCode=143 Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.950382 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b8a5382-f4c9-4993-ae76-8fde9df100a4","Type":"ContainerDied","Data":"c11624573f65da712cb8ae2cc509df131024edc89594361329aae0f55a91756e"} Jan 26 18:05:01 crc kubenswrapper[4787]: I0126 18:05:01.991990 4787 scope.go:117] "RemoveContainer" containerID="cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.022833 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:02 crc 
kubenswrapper[4787]: I0126 18:05:02.056115 4787 scope.go:117] "RemoveContainer" containerID="309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3" Jan 26 18:05:02 crc kubenswrapper[4787]: E0126 18:05:02.056513 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3\": container with ID starting with 309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3 not found: ID does not exist" containerID="309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.056544 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3"} err="failed to get container status \"309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3\": rpc error: code = NotFound desc = could not find container \"309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3\": container with ID starting with 309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3 not found: ID does not exist" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.056562 4787 scope.go:117] "RemoveContainer" containerID="cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6" Jan 26 18:05:02 crc kubenswrapper[4787]: E0126 18:05:02.056799 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6\": container with ID starting with cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6 not found: ID does not exist" containerID="cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.056818 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6"} err="failed to get container status \"cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6\": rpc error: code = NotFound desc = could not find container \"cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6\": container with ID starting with cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6 not found: ID does not exist" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.056852 4787 scope.go:117] "RemoveContainer" containerID="309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.057107 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3"} err="failed to get container status \"309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3\": rpc error: code = NotFound desc = could not find container \"309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3\": container with ID starting with 309d8873347721873ee1b294f10af35a06585806a252ff91c8f21452c7af5ea3 not found: ID does not exist" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.057124 4787 scope.go:117] "RemoveContainer" containerID="cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.057357 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6"} err="failed to get container status \"cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6\": rpc error: code = NotFound desc = could not find container \"cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6\": container with ID starting with cce5aba21ace2a28f3ee111b694d3dee2bf363e9908b52592e4819b50f5f11a6 not found: ID does not 
exist" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.068905 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.084188 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:02 crc kubenswrapper[4787]: E0126 18:05:02.084557 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a5382-f4c9-4993-ae76-8fde9df100a4" containerName="nova-metadata-metadata" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.084570 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8a5382-f4c9-4993-ae76-8fde9df100a4" containerName="nova-metadata-metadata" Jan 26 18:05:02 crc kubenswrapper[4787]: E0126 18:05:02.084601 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a5382-f4c9-4993-ae76-8fde9df100a4" containerName="nova-metadata-log" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.084608 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8a5382-f4c9-4993-ae76-8fde9df100a4" containerName="nova-metadata-log" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.084757 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8a5382-f4c9-4993-ae76-8fde9df100a4" containerName="nova-metadata-log" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.084780 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8a5382-f4c9-4993-ae76-8fde9df100a4" containerName="nova-metadata-metadata" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.085699 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.087982 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.088479 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.099908 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.199084 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-logs\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.199210 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.199253 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qjpn\" (UniqueName: \"kubernetes.io/projected/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-kube-api-access-2qjpn\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.199326 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.199351 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-config-data\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.301820 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.301873 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-config-data\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.302003 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-logs\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.302463 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-logs\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.302527 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.302562 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qjpn\" (UniqueName: \"kubernetes.io/projected/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-kube-api-access-2qjpn\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.309360 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.316456 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-config-data\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.326345 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qjpn\" (UniqueName: \"kubernetes.io/projected/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-kube-api-access-2qjpn\") pod \"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.332107 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.410581 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.900300 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.963731 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff2e6710-142b-4ddf-b0c7-b8dd987d0010","Type":"ContainerStarted","Data":"e0b5a5b478f3e2197d8b1ecd8a61881a60e2ceeb84ee9481251d6f1f7c83927a"} Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.965370 4787 generic.go:334] "Generic (PLEG): container finished" podID="db02f653-5216-4701-9ab6-3cf3e9352d87" containerID="5d3bd6542ee023476beb041680c535d96acb21ed4b494b28f57144c146974554" exitCode=0 Jan 26 18:05:02 crc kubenswrapper[4787]: I0126 18:05:02.965433 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pb7c9" event={"ID":"db02f653-5216-4701-9ab6-3cf3e9352d87","Type":"ContainerDied","Data":"5d3bd6542ee023476beb041680c535d96acb21ed4b494b28f57144c146974554"} Jan 26 18:05:03 crc kubenswrapper[4787]: I0126 18:05:03.599715 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b8a5382-f4c9-4993-ae76-8fde9df100a4" path="/var/lib/kubelet/pods/6b8a5382-f4c9-4993-ae76-8fde9df100a4/volumes" Jan 26 18:05:03 crc kubenswrapper[4787]: I0126 18:05:03.981395 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff2e6710-142b-4ddf-b0c7-b8dd987d0010","Type":"ContainerStarted","Data":"336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783"} Jan 26 18:05:03 crc kubenswrapper[4787]: I0126 18:05:03.981747 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"ff2e6710-142b-4ddf-b0c7-b8dd987d0010","Type":"ContainerStarted","Data":"2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24"} Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.019740 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.019722634 podStartE2EDuration="3.019722634s" podCreationTimestamp="2026-01-26 18:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:05:04.009195697 +0000 UTC m=+1272.716331830" watchObservedRunningTime="2026-01-26 18:05:04.019722634 +0000 UTC m=+1272.726858767" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.084539 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.084608 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.101092 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.328532 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.431341 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.431383 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.444830 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-scripts\") pod \"db02f653-5216-4701-9ab6-3cf3e9352d87\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.444899 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-combined-ca-bundle\") pod \"db02f653-5216-4701-9ab6-3cf3e9352d87\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.444968 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fswr7\" (UniqueName: \"kubernetes.io/projected/db02f653-5216-4701-9ab6-3cf3e9352d87-kube-api-access-fswr7\") pod \"db02f653-5216-4701-9ab6-3cf3e9352d87\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.445109 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-config-data\") pod \"db02f653-5216-4701-9ab6-3cf3e9352d87\" (UID: \"db02f653-5216-4701-9ab6-3cf3e9352d87\") " Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.451220 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-scripts" (OuterVolumeSpecName: "scripts") pod "db02f653-5216-4701-9ab6-3cf3e9352d87" (UID: "db02f653-5216-4701-9ab6-3cf3e9352d87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.452126 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db02f653-5216-4701-9ab6-3cf3e9352d87-kube-api-access-fswr7" (OuterVolumeSpecName: "kube-api-access-fswr7") pod "db02f653-5216-4701-9ab6-3cf3e9352d87" (UID: "db02f653-5216-4701-9ab6-3cf3e9352d87"). InnerVolumeSpecName "kube-api-access-fswr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.457688 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.467118 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.486321 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-config-data" (OuterVolumeSpecName: "config-data") pod "db02f653-5216-4701-9ab6-3cf3e9352d87" (UID: "db02f653-5216-4701-9ab6-3cf3e9352d87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.494304 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db02f653-5216-4701-9ab6-3cf3e9352d87" (UID: "db02f653-5216-4701-9ab6-3cf3e9352d87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.545074 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-lzx9t"] Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.545414 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" podUID="d3368f42-602d-4df9-a27d-403bd3ffae37" containerName="dnsmasq-dns" containerID="cri-o://d9e9a8637c4103ef3906e62de57bd46a3f104eeda7a4bda74fefad8d9e84e2a2" gracePeriod=10 Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.547062 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.547389 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.547404 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fswr7\" (UniqueName: \"kubernetes.io/projected/db02f653-5216-4701-9ab6-3cf3e9352d87-kube-api-access-fswr7\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.547418 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db02f653-5216-4701-9ab6-3cf3e9352d87-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.993033 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pb7c9" event={"ID":"db02f653-5216-4701-9ab6-3cf3e9352d87","Type":"ContainerDied","Data":"b765338f3df72aba330c366fe5a9cf949e137ab381a63dd43218889bc78f4367"} Jan 26 18:05:04 crc kubenswrapper[4787]: 
I0126 18:05:04.993082 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b765338f3df72aba330c366fe5a9cf949e137ab381a63dd43218889bc78f4367" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.993162 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pb7c9" Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.998342 4787 generic.go:334] "Generic (PLEG): container finished" podID="d3368f42-602d-4df9-a27d-403bd3ffae37" containerID="d9e9a8637c4103ef3906e62de57bd46a3f104eeda7a4bda74fefad8d9e84e2a2" exitCode=0 Jan 26 18:05:04 crc kubenswrapper[4787]: I0126 18:05:04.998418 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" event={"ID":"d3368f42-602d-4df9-a27d-403bd3ffae37","Type":"ContainerDied","Data":"d9e9a8637c4103ef3906e62de57bd46a3f104eeda7a4bda74fefad8d9e84e2a2"} Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.050727 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.074527 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.164054 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-nb\") pod \"d3368f42-602d-4df9-a27d-403bd3ffae37\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.164133 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-sb\") pod \"d3368f42-602d-4df9-a27d-403bd3ffae37\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.164195 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm46h\" (UniqueName: \"kubernetes.io/projected/d3368f42-602d-4df9-a27d-403bd3ffae37-kube-api-access-jm46h\") pod \"d3368f42-602d-4df9-a27d-403bd3ffae37\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.164237 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-config\") pod \"d3368f42-602d-4df9-a27d-403bd3ffae37\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.164275 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-swift-storage-0\") pod \"d3368f42-602d-4df9-a27d-403bd3ffae37\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.164366 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-svc\") pod \"d3368f42-602d-4df9-a27d-403bd3ffae37\" (UID: \"d3368f42-602d-4df9-a27d-403bd3ffae37\") " Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.166708 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.168184 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="074b52bf-dea3-41cc-8d16-9f979d1536d0" containerName="nova-api-log" containerID="cri-o://b238443b9f5decbd325576ee239422b7e86e34f9019a74fb98b37f687d737e2f" gracePeriod=30 Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.168692 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="074b52bf-dea3-41cc-8d16-9f979d1536d0" containerName="nova-api-api" containerID="cri-o://6f2d9139b484d7beab4cdf3f58277ed237ea337244111a6cb45dc212687c343e" gracePeriod=30 Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.172979 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="074b52bf-dea3-41cc-8d16-9f979d1536d0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.173175 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="074b52bf-dea3-41cc-8d16-9f979d1536d0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.173378 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3368f42-602d-4df9-a27d-403bd3ffae37-kube-api-access-jm46h" (OuterVolumeSpecName: 
"kube-api-access-jm46h") pod "d3368f42-602d-4df9-a27d-403bd3ffae37" (UID: "d3368f42-602d-4df9-a27d-403bd3ffae37"). InnerVolumeSpecName "kube-api-access-jm46h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.219128 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.232091 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d3368f42-602d-4df9-a27d-403bd3ffae37" (UID: "d3368f42-602d-4df9-a27d-403bd3ffae37"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.238507 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d3368f42-602d-4df9-a27d-403bd3ffae37" (UID: "d3368f42-602d-4df9-a27d-403bd3ffae37"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.244749 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d3368f42-602d-4df9-a27d-403bd3ffae37" (UID: "d3368f42-602d-4df9-a27d-403bd3ffae37"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.255929 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-config" (OuterVolumeSpecName: "config") pod "d3368f42-602d-4df9-a27d-403bd3ffae37" (UID: "d3368f42-602d-4df9-a27d-403bd3ffae37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.258404 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3368f42-602d-4df9-a27d-403bd3ffae37" (UID: "d3368f42-602d-4df9-a27d-403bd3ffae37"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.266281 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.266305 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.266314 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.266323 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm46h\" (UniqueName: \"kubernetes.io/projected/d3368f42-602d-4df9-a27d-403bd3ffae37-kube-api-access-jm46h\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:05 crc 
kubenswrapper[4787]: I0126 18:05:05.266335 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.266343 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3368f42-602d-4df9-a27d-403bd3ffae37-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:05 crc kubenswrapper[4787]: I0126 18:05:05.583487 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.018103 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" event={"ID":"d3368f42-602d-4df9-a27d-403bd3ffae37","Type":"ContainerDied","Data":"0110c0e67aa79afedc80e411a4abc87293086bdd11a1fdc59ddfcabe93a9ea74"} Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.018151 4787 scope.go:117] "RemoveContainer" containerID="d9e9a8637c4103ef3906e62de57bd46a3f104eeda7a4bda74fefad8d9e84e2a2" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.018250 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-lzx9t" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.024751 4787 generic.go:334] "Generic (PLEG): container finished" podID="074b52bf-dea3-41cc-8d16-9f979d1536d0" containerID="b238443b9f5decbd325576ee239422b7e86e34f9019a74fb98b37f687d737e2f" exitCode=143 Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.024901 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ff2e6710-142b-4ddf-b0c7-b8dd987d0010" containerName="nova-metadata-log" containerID="cri-o://2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24" gracePeriod=30 Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.025202 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"074b52bf-dea3-41cc-8d16-9f979d1536d0","Type":"ContainerDied","Data":"b238443b9f5decbd325576ee239422b7e86e34f9019a74fb98b37f687d737e2f"} Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.025792 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ff2e6710-142b-4ddf-b0c7-b8dd987d0010" containerName="nova-metadata-metadata" containerID="cri-o://336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783" gracePeriod=30 Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.053886 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-lzx9t"] Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.061266 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-lzx9t"] Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.066000 4787 scope.go:117] "RemoveContainer" containerID="379f17dd89ea40d49a1bc6ceb9ea880e7c897b014636927cd337f9fec894bd0b" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.647126 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.802100 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-combined-ca-bundle\") pod \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.802176 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-nova-metadata-tls-certs\") pod \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.802346 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qjpn\" (UniqueName: \"kubernetes.io/projected/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-kube-api-access-2qjpn\") pod \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.802433 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-config-data\") pod \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.802507 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-logs\") pod \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\" (UID: \"ff2e6710-142b-4ddf-b0c7-b8dd987d0010\") " Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.803086 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-logs" (OuterVolumeSpecName: "logs") pod "ff2e6710-142b-4ddf-b0c7-b8dd987d0010" (UID: "ff2e6710-142b-4ddf-b0c7-b8dd987d0010"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.817173 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-kube-api-access-2qjpn" (OuterVolumeSpecName: "kube-api-access-2qjpn") pod "ff2e6710-142b-4ddf-b0c7-b8dd987d0010" (UID: "ff2e6710-142b-4ddf-b0c7-b8dd987d0010"). InnerVolumeSpecName "kube-api-access-2qjpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.831450 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff2e6710-142b-4ddf-b0c7-b8dd987d0010" (UID: "ff2e6710-142b-4ddf-b0c7-b8dd987d0010"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.839924 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-config-data" (OuterVolumeSpecName: "config-data") pod "ff2e6710-142b-4ddf-b0c7-b8dd987d0010" (UID: "ff2e6710-142b-4ddf-b0c7-b8dd987d0010"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.861255 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ff2e6710-142b-4ddf-b0c7-b8dd987d0010" (UID: "ff2e6710-142b-4ddf-b0c7-b8dd987d0010"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.904377 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qjpn\" (UniqueName: \"kubernetes.io/projected/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-kube-api-access-2qjpn\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.904408 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.904420 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.904429 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:06 crc kubenswrapper[4787]: I0126 18:05:06.904437 4787 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff2e6710-142b-4ddf-b0c7-b8dd987d0010-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.034989 4787 generic.go:334] "Generic (PLEG): container finished" podID="fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f" containerID="090e2277c794b4b34d898ea6fc2742fd889317e021791fd07b23fecfee606475" exitCode=0 Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.035057 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8fcbn" 
event={"ID":"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f","Type":"ContainerDied","Data":"090e2277c794b4b34d898ea6fc2742fd889317e021791fd07b23fecfee606475"} Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.037456 4787 generic.go:334] "Generic (PLEG): container finished" podID="ff2e6710-142b-4ddf-b0c7-b8dd987d0010" containerID="336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783" exitCode=0 Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.037498 4787 generic.go:334] "Generic (PLEG): container finished" podID="ff2e6710-142b-4ddf-b0c7-b8dd987d0010" containerID="2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24" exitCode=143 Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.037474 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.037484 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff2e6710-142b-4ddf-b0c7-b8dd987d0010","Type":"ContainerDied","Data":"336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783"} Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.037581 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff2e6710-142b-4ddf-b0c7-b8dd987d0010","Type":"ContainerDied","Data":"2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24"} Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.037592 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff2e6710-142b-4ddf-b0c7-b8dd987d0010","Type":"ContainerDied","Data":"e0b5a5b478f3e2197d8b1ecd8a61881a60e2ceeb84ee9481251d6f1f7c83927a"} Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.037607 4787 scope.go:117] "RemoveContainer" containerID="336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.037812 4787 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c67738ab-5ab4-4b96-87f1-b1238e0df8fb" containerName="nova-scheduler-scheduler" containerID="cri-o://bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5" gracePeriod=30 Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.062592 4787 scope.go:117] "RemoveContainer" containerID="2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.084854 4787 scope.go:117] "RemoveContainer" containerID="336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.087869 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:07 crc kubenswrapper[4787]: E0126 18:05:07.093184 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783\": container with ID starting with 336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783 not found: ID does not exist" containerID="336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.093242 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783"} err="failed to get container status \"336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783\": rpc error: code = NotFound desc = could not find container \"336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783\": container with ID starting with 336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783 not found: ID does not exist" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.093288 4787 scope.go:117] "RemoveContainer" 
containerID="2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.101385 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:07 crc kubenswrapper[4787]: E0126 18:05:07.101559 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24\": container with ID starting with 2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24 not found: ID does not exist" containerID="2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.101626 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24"} err="failed to get container status \"2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24\": rpc error: code = NotFound desc = could not find container \"2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24\": container with ID starting with 2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24 not found: ID does not exist" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.101654 4787 scope.go:117] "RemoveContainer" containerID="336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.102793 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783"} err="failed to get container status \"336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783\": rpc error: code = NotFound desc = could not find container \"336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783\": container with ID starting with 
336a95591d8373b3c9646764304afeee9ade7e60b5d1e513c9b189b6c81df783 not found: ID does not exist" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.102812 4787 scope.go:117] "RemoveContainer" containerID="2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.103270 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24"} err="failed to get container status \"2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24\": rpc error: code = NotFound desc = could not find container \"2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24\": container with ID starting with 2cc2f13a95de5826b07b4dd20ae8fc84065d78f541cc44a9ab6fb8ad05e50d24 not found: ID does not exist" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.114939 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:07 crc kubenswrapper[4787]: E0126 18:05:07.115410 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2e6710-142b-4ddf-b0c7-b8dd987d0010" containerName="nova-metadata-log" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.115432 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2e6710-142b-4ddf-b0c7-b8dd987d0010" containerName="nova-metadata-log" Jan 26 18:05:07 crc kubenswrapper[4787]: E0126 18:05:07.115461 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2e6710-142b-4ddf-b0c7-b8dd987d0010" containerName="nova-metadata-metadata" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.115469 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2e6710-142b-4ddf-b0c7-b8dd987d0010" containerName="nova-metadata-metadata" Jan 26 18:05:07 crc kubenswrapper[4787]: E0126 18:05:07.115487 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db02f653-5216-4701-9ab6-3cf3e9352d87" 
containerName="nova-manage" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.115495 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="db02f653-5216-4701-9ab6-3cf3e9352d87" containerName="nova-manage" Jan 26 18:05:07 crc kubenswrapper[4787]: E0126 18:05:07.115511 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3368f42-602d-4df9-a27d-403bd3ffae37" containerName="dnsmasq-dns" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.115518 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3368f42-602d-4df9-a27d-403bd3ffae37" containerName="dnsmasq-dns" Jan 26 18:05:07 crc kubenswrapper[4787]: E0126 18:05:07.115533 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3368f42-602d-4df9-a27d-403bd3ffae37" containerName="init" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.115542 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3368f42-602d-4df9-a27d-403bd3ffae37" containerName="init" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.115732 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2e6710-142b-4ddf-b0c7-b8dd987d0010" containerName="nova-metadata-metadata" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.115745 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3368f42-602d-4df9-a27d-403bd3ffae37" containerName="dnsmasq-dns" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.115762 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2e6710-142b-4ddf-b0c7-b8dd987d0010" containerName="nova-metadata-log" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.115783 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="db02f653-5216-4701-9ab6-3cf3e9352d87" containerName="nova-manage" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.117268 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.123379 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.123580 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.131563 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.210593 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.211005 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d35ba76b-3385-47bb-bf77-8e0d523df927-logs\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.211200 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-config-data\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.211360 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.211502 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vns2d\" (UniqueName: \"kubernetes.io/projected/d35ba76b-3385-47bb-bf77-8e0d523df927-kube-api-access-vns2d\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.313033 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d35ba76b-3385-47bb-bf77-8e0d523df927-logs\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.313161 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-config-data\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.313223 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.313269 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vns2d\" (UniqueName: \"kubernetes.io/projected/d35ba76b-3385-47bb-bf77-8e0d523df927-kube-api-access-vns2d\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.313325 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.314247 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d35ba76b-3385-47bb-bf77-8e0d523df927-logs\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.318812 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.322479 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.322897 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-config-data\") pod \"nova-metadata-0\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.331376 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vns2d\" (UniqueName: \"kubernetes.io/projected/d35ba76b-3385-47bb-bf77-8e0d523df927-kube-api-access-vns2d\") pod \"nova-metadata-0\" 
(UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.445913 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.608143 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3368f42-602d-4df9-a27d-403bd3ffae37" path="/var/lib/kubelet/pods/d3368f42-602d-4df9-a27d-403bd3ffae37/volumes" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.608819 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff2e6710-142b-4ddf-b0c7-b8dd987d0010" path="/var/lib/kubelet/pods/ff2e6710-142b-4ddf-b0c7-b8dd987d0010/volumes" Jan 26 18:05:07 crc kubenswrapper[4787]: I0126 18:05:07.926020 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.048469 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d35ba76b-3385-47bb-bf77-8e0d523df927","Type":"ContainerStarted","Data":"0c042d5a05a6c48a6c98f4665b10bd784d14f605f6e4c16ae7804becd439699b"} Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.354910 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.448354 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-config-data\") pod \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.449096 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6h6x\" (UniqueName: \"kubernetes.io/projected/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-kube-api-access-l6h6x\") pod \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.449400 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-scripts\") pod \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.449557 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-combined-ca-bundle\") pod \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\" (UID: \"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f\") " Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.453176 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-scripts" (OuterVolumeSpecName: "scripts") pod "fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f" (UID: "fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.453330 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-kube-api-access-l6h6x" (OuterVolumeSpecName: "kube-api-access-l6h6x") pod "fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f" (UID: "fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f"). InnerVolumeSpecName "kube-api-access-l6h6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.474923 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f" (UID: "fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.480898 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-config-data" (OuterVolumeSpecName: "config-data") pod "fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f" (UID: "fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.551444 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.551484 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.551495 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6h6x\" (UniqueName: \"kubernetes.io/projected/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-kube-api-access-l6h6x\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.551505 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:08 crc kubenswrapper[4787]: I0126 18:05:08.935686 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.064153 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8fcbn" event={"ID":"fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f","Type":"ContainerDied","Data":"f9cd67884bd11af0c0821d4e756ed23fd2ef08c40976aac822341ccb28fd8e6e"} Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.064203 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9cd67884bd11af0c0821d4e756ed23fd2ef08c40976aac822341ccb28fd8e6e" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.064266 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8fcbn" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.073456 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d35ba76b-3385-47bb-bf77-8e0d523df927","Type":"ContainerStarted","Data":"2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811"} Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.073503 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d35ba76b-3385-47bb-bf77-8e0d523df927","Type":"ContainerStarted","Data":"634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01"} Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.115851 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.115828777 podStartE2EDuration="2.115828777s" podCreationTimestamp="2026-01-26 18:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:05:09.094505196 +0000 UTC m=+1277.801641339" watchObservedRunningTime="2026-01-26 18:05:09.115828777 +0000 UTC m=+1277.822964910" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.122861 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 18:05:09 crc kubenswrapper[4787]: E0126 18:05:09.123348 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f" containerName="nova-cell1-conductor-db-sync" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.123371 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f" containerName="nova-cell1-conductor-db-sync" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.123618 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f" 
containerName="nova-cell1-conductor-db-sync" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.125016 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.129409 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.136637 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.263291 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.263370 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.263408 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngg4t\" (UniqueName: \"kubernetes.io/projected/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-kube-api-access-ngg4t\") pod \"nova-cell1-conductor-0\" (UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.365166 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.365494 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.365670 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngg4t\" (UniqueName: \"kubernetes.io/projected/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-kube-api-access-ngg4t\") pod \"nova-cell1-conductor-0\" (UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.371012 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.371396 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.385249 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngg4t\" (UniqueName: \"kubernetes.io/projected/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-kube-api-access-ngg4t\") pod \"nova-cell1-conductor-0\" 
(UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:09 crc kubenswrapper[4787]: E0126 18:05:09.433247 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 18:05:09 crc kubenswrapper[4787]: E0126 18:05:09.434503 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 18:05:09 crc kubenswrapper[4787]: E0126 18:05:09.435778 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 18:05:09 crc kubenswrapper[4787]: E0126 18:05:09.435825 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c67738ab-5ab4-4b96-87f1-b1238e0df8fb" containerName="nova-scheduler-scheduler" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.480020 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:09 crc kubenswrapper[4787]: I0126 18:05:09.933040 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 18:05:10 crc kubenswrapper[4787]: I0126 18:05:10.082612 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"afd68dd7-739f-4cd0-b3eb-c786b79c4b40","Type":"ContainerStarted","Data":"74078478df63213f01b88149228450d5550b6e6e9e883ed28a2d025f36e62d39"} Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.094043 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"afd68dd7-739f-4cd0-b3eb-c786b79c4b40","Type":"ContainerStarted","Data":"65139bd59ec594a52aca13930dabadeabe11be030f710dd6f822faa0baa01ffa"} Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.094411 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.100871 4787 generic.go:334] "Generic (PLEG): container finished" podID="074b52bf-dea3-41cc-8d16-9f979d1536d0" containerID="6f2d9139b484d7beab4cdf3f58277ed237ea337244111a6cb45dc212687c343e" exitCode=0 Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.100911 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"074b52bf-dea3-41cc-8d16-9f979d1536d0","Type":"ContainerDied","Data":"6f2d9139b484d7beab4cdf3f58277ed237ea337244111a6cb45dc212687c343e"} Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.120038 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.120017363 podStartE2EDuration="2.120017363s" podCreationTimestamp="2026-01-26 18:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
18:05:11.11828745 +0000 UTC m=+1279.825423583" watchObservedRunningTime="2026-01-26 18:05:11.120017363 +0000 UTC m=+1279.827153496" Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.759693 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.764674 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.925581 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55xfs\" (UniqueName: \"kubernetes.io/projected/074b52bf-dea3-41cc-8d16-9f979d1536d0-kube-api-access-55xfs\") pod \"074b52bf-dea3-41cc-8d16-9f979d1536d0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.925682 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-combined-ca-bundle\") pod \"074b52bf-dea3-41cc-8d16-9f979d1536d0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.925710 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-config-data\") pod \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\" (UID: \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.925840 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndp6m\" (UniqueName: \"kubernetes.io/projected/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-kube-api-access-ndp6m\") pod \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\" (UID: \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.925885 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-config-data\") pod \"074b52bf-dea3-41cc-8d16-9f979d1536d0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.925924 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-combined-ca-bundle\") pod \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\" (UID: \"c67738ab-5ab4-4b96-87f1-b1238e0df8fb\") " Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.926034 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074b52bf-dea3-41cc-8d16-9f979d1536d0-logs\") pod \"074b52bf-dea3-41cc-8d16-9f979d1536d0\" (UID: \"074b52bf-dea3-41cc-8d16-9f979d1536d0\") " Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.927146 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/074b52bf-dea3-41cc-8d16-9f979d1536d0-logs" (OuterVolumeSpecName: "logs") pod "074b52bf-dea3-41cc-8d16-9f979d1536d0" (UID: "074b52bf-dea3-41cc-8d16-9f979d1536d0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.944296 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-kube-api-access-ndp6m" (OuterVolumeSpecName: "kube-api-access-ndp6m") pod "c67738ab-5ab4-4b96-87f1-b1238e0df8fb" (UID: "c67738ab-5ab4-4b96-87f1-b1238e0df8fb"). InnerVolumeSpecName "kube-api-access-ndp6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:11 crc kubenswrapper[4787]: I0126 18:05:11.971191 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/074b52bf-dea3-41cc-8d16-9f979d1536d0-kube-api-access-55xfs" (OuterVolumeSpecName: "kube-api-access-55xfs") pod "074b52bf-dea3-41cc-8d16-9f979d1536d0" (UID: "074b52bf-dea3-41cc-8d16-9f979d1536d0"). InnerVolumeSpecName "kube-api-access-55xfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.003159 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c67738ab-5ab4-4b96-87f1-b1238e0df8fb" (UID: "c67738ab-5ab4-4b96-87f1-b1238e0df8fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.008083 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "074b52bf-dea3-41cc-8d16-9f979d1536d0" (UID: "074b52bf-dea3-41cc-8d16-9f979d1536d0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.027847 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074b52bf-dea3-41cc-8d16-9f979d1536d0-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.027879 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55xfs\" (UniqueName: \"kubernetes.io/projected/074b52bf-dea3-41cc-8d16-9f979d1536d0-kube-api-access-55xfs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.027891 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.027903 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndp6m\" (UniqueName: \"kubernetes.io/projected/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-kube-api-access-ndp6m\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.027913 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.028557 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-config-data" (OuterVolumeSpecName: "config-data") pod "074b52bf-dea3-41cc-8d16-9f979d1536d0" (UID: "074b52bf-dea3-41cc-8d16-9f979d1536d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.033238 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-config-data" (OuterVolumeSpecName: "config-data") pod "c67738ab-5ab4-4b96-87f1-b1238e0df8fb" (UID: "c67738ab-5ab4-4b96-87f1-b1238e0df8fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.116708 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"074b52bf-dea3-41cc-8d16-9f979d1536d0","Type":"ContainerDied","Data":"e65fc8a4ea6bb611c4b5bc07f818b22974e3a3818a9598785175eb092b86dfb8"} Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.116755 4787 scope.go:117] "RemoveContainer" containerID="6f2d9139b484d7beab4cdf3f58277ed237ea337244111a6cb45dc212687c343e" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.116907 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.121545 4787 generic.go:334] "Generic (PLEG): container finished" podID="c67738ab-5ab4-4b96-87f1-b1238e0df8fb" containerID="bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5" exitCode=0 Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.121626 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.121668 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c67738ab-5ab4-4b96-87f1-b1238e0df8fb","Type":"ContainerDied","Data":"bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5"} Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.121700 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c67738ab-5ab4-4b96-87f1-b1238e0df8fb","Type":"ContainerDied","Data":"eb1c50bec0a7ea902c4709893df52022b4045b6d3d7895d6d181345f1150415a"} Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.129257 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/074b52bf-dea3-41cc-8d16-9f979d1536d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.129309 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c67738ab-5ab4-4b96-87f1-b1238e0df8fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.144429 4787 scope.go:117] "RemoveContainer" containerID="b238443b9f5decbd325576ee239422b7e86e34f9019a74fb98b37f687d737e2f" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.169148 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.183820 4787 scope.go:117] "RemoveContainer" containerID="bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.208904 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.209607 4787 scope.go:117] "RemoveContainer" 
containerID="bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5" Jan 26 18:05:12 crc kubenswrapper[4787]: E0126 18:05:12.218471 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5\": container with ID starting with bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5 not found: ID does not exist" containerID="bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.218732 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5"} err="failed to get container status \"bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5\": rpc error: code = NotFound desc = could not find container \"bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5\": container with ID starting with bd1e4b7ee109e2c477d87f37c3c52d2eaae0b61aa9b4e83ed34d975bf2d03ef5 not found: ID does not exist" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.257326 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.274002 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:12 crc kubenswrapper[4787]: E0126 18:05:12.274455 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074b52bf-dea3-41cc-8d16-9f979d1536d0" containerName="nova-api-log" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.274484 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="074b52bf-dea3-41cc-8d16-9f979d1536d0" containerName="nova-api-log" Jan 26 18:05:12 crc kubenswrapper[4787]: E0126 18:05:12.274520 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074b52bf-dea3-41cc-8d16-9f979d1536d0" 
containerName="nova-api-api" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.274529 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="074b52bf-dea3-41cc-8d16-9f979d1536d0" containerName="nova-api-api" Jan 26 18:05:12 crc kubenswrapper[4787]: E0126 18:05:12.274552 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c67738ab-5ab4-4b96-87f1-b1238e0df8fb" containerName="nova-scheduler-scheduler" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.274558 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c67738ab-5ab4-4b96-87f1-b1238e0df8fb" containerName="nova-scheduler-scheduler" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.274707 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c67738ab-5ab4-4b96-87f1-b1238e0df8fb" containerName="nova-scheduler-scheduler" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.274725 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="074b52bf-dea3-41cc-8d16-9f979d1536d0" containerName="nova-api-api" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.274744 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="074b52bf-dea3-41cc-8d16-9f979d1536d0" containerName="nova-api-log" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.275744 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.278169 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.297354 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.308905 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.310604 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.313236 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.320987 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.335525 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.446785 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-config-data\") pod \"nova-scheduler-0\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.446841 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.446863 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzwc8\" (UniqueName: \"kubernetes.io/projected/e1b6ff16-f547-47ae-94ea-5fed2393e15d-kube-api-access-wzwc8\") pod \"nova-scheduler-0\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.446909 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-config-data\") pod \"nova-api-0\" (UID: 
\"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.446961 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-logs\") pod \"nova-api-0\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.446979 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.447004 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdt97\" (UniqueName: \"kubernetes.io/projected/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-kube-api-access-kdt97\") pod \"nova-api-0\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.447143 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.447271 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.548720 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-config-data\") pod \"nova-scheduler-0\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.549171 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.549207 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzwc8\" (UniqueName: \"kubernetes.io/projected/e1b6ff16-f547-47ae-94ea-5fed2393e15d-kube-api-access-wzwc8\") pod \"nova-scheduler-0\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.549288 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-config-data\") pod \"nova-api-0\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.549346 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-logs\") pod \"nova-api-0\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.549368 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.549411 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdt97\" (UniqueName: \"kubernetes.io/projected/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-kube-api-access-kdt97\") pod \"nova-api-0\" (UID: 
\"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.551223 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-logs\") pod \"nova-api-0\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.554730 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.566997 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.569646 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-config-data\") pod \"nova-scheduler-0\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.571528 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-config-data\") pod \"nova-api-0\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.575930 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdt97\" (UniqueName: 
\"kubernetes.io/projected/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-kube-api-access-kdt97\") pod \"nova-api-0\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.580671 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzwc8\" (UniqueName: \"kubernetes.io/projected/e1b6ff16-f547-47ae-94ea-5fed2393e15d-kube-api-access-wzwc8\") pod \"nova-scheduler-0\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.593561 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:12 crc kubenswrapper[4787]: I0126 18:05:12.734675 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:05:13 crc kubenswrapper[4787]: I0126 18:05:13.097394 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:13 crc kubenswrapper[4787]: I0126 18:05:13.139176 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e","Type":"ContainerStarted","Data":"dbdf912a6e048df87834cac4280a8816831d84854a553627e6bf5c09b6105aad"} Jan 26 18:05:13 crc kubenswrapper[4787]: I0126 18:05:13.140516 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:05:13 crc kubenswrapper[4787]: I0126 18:05:13.140725 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a1364ca0-6d34-493f-98b2-7956de27e72c" containerName="kube-state-metrics" containerID="cri-o://0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293" gracePeriod=30 Jan 26 18:05:13 crc kubenswrapper[4787]: I0126 18:05:13.275064 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 26 18:05:13 crc kubenswrapper[4787]: W0126 18:05:13.304838 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1b6ff16_f547_47ae_94ea_5fed2393e15d.slice/crio-2a5e054175d5ba3f94a669d698fdcba4d9057b99354a77c4fa7324f61126ed01 WatchSource:0}: Error finding container 2a5e054175d5ba3f94a669d698fdcba4d9057b99354a77c4fa7324f61126ed01: Status 404 returned error can't find the container with id 2a5e054175d5ba3f94a669d698fdcba4d9057b99354a77c4fa7324f61126ed01 Jan 26 18:05:13 crc kubenswrapper[4787]: I0126 18:05:13.531321 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:05:13 crc kubenswrapper[4787]: I0126 18:05:13.605101 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="074b52bf-dea3-41cc-8d16-9f979d1536d0" path="/var/lib/kubelet/pods/074b52bf-dea3-41cc-8d16-9f979d1536d0/volumes" Jan 26 18:05:13 crc kubenswrapper[4787]: I0126 18:05:13.606228 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c67738ab-5ab4-4b96-87f1-b1238e0df8fb" path="/var/lib/kubelet/pods/c67738ab-5ab4-4b96-87f1-b1238e0df8fb/volumes" Jan 26 18:05:13 crc kubenswrapper[4787]: I0126 18:05:13.667351 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gdkp\" (UniqueName: \"kubernetes.io/projected/a1364ca0-6d34-493f-98b2-7956de27e72c-kube-api-access-2gdkp\") pod \"a1364ca0-6d34-493f-98b2-7956de27e72c\" (UID: \"a1364ca0-6d34-493f-98b2-7956de27e72c\") " Jan 26 18:05:13 crc kubenswrapper[4787]: I0126 18:05:13.672719 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1364ca0-6d34-493f-98b2-7956de27e72c-kube-api-access-2gdkp" (OuterVolumeSpecName: "kube-api-access-2gdkp") pod "a1364ca0-6d34-493f-98b2-7956de27e72c" (UID: "a1364ca0-6d34-493f-98b2-7956de27e72c"). 
InnerVolumeSpecName "kube-api-access-2gdkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:13 crc kubenswrapper[4787]: I0126 18:05:13.770037 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gdkp\" (UniqueName: \"kubernetes.io/projected/a1364ca0-6d34-493f-98b2-7956de27e72c-kube-api-access-2gdkp\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.149248 4787 generic.go:334] "Generic (PLEG): container finished" podID="a1364ca0-6d34-493f-98b2-7956de27e72c" containerID="0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293" exitCode=2 Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.149350 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.156315 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1364ca0-6d34-493f-98b2-7956de27e72c","Type":"ContainerDied","Data":"0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293"} Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.156378 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1364ca0-6d34-493f-98b2-7956de27e72c","Type":"ContainerDied","Data":"1c2f90edace1bd4dda274dbffb8d99c279dd09ff884bfb7a5f3620a227f93b73"} Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.156401 4787 scope.go:117] "RemoveContainer" containerID="0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.165239 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e","Type":"ContainerStarted","Data":"3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa"} Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.165366 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e","Type":"ContainerStarted","Data":"811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a"} Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.170289 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e1b6ff16-f547-47ae-94ea-5fed2393e15d","Type":"ContainerStarted","Data":"ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3"} Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.170505 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e1b6ff16-f547-47ae-94ea-5fed2393e15d","Type":"ContainerStarted","Data":"2a5e054175d5ba3f94a669d698fdcba4d9057b99354a77c4fa7324f61126ed01"} Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.185330 4787 scope.go:117] "RemoveContainer" containerID="0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293" Jan 26 18:05:14 crc kubenswrapper[4787]: E0126 18:05:14.185748 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293\": container with ID starting with 0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293 not found: ID does not exist" containerID="0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.185778 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293"} err="failed to get container status \"0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293\": rpc error: code = NotFound desc = could not find container \"0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293\": container with ID starting with 
0186912e6820f020788be1b57c75c455b297df3e2de8710b190df33d15401293 not found: ID does not exist" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.186554 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.194769 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.207444 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.207425059 podStartE2EDuration="2.207425059s" podCreationTimestamp="2026-01-26 18:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:05:14.202658172 +0000 UTC m=+1282.909794305" watchObservedRunningTime="2026-01-26 18:05:14.207425059 +0000 UTC m=+1282.914561192" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.221202 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:05:14 crc kubenswrapper[4787]: E0126 18:05:14.221660 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1364ca0-6d34-493f-98b2-7956de27e72c" containerName="kube-state-metrics" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.221680 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1364ca0-6d34-493f-98b2-7956de27e72c" containerName="kube-state-metrics" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.221873 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1364ca0-6d34-493f-98b2-7956de27e72c" containerName="kube-state-metrics" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.222535 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.229851 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.230328 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.250208 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.259285 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.259254056 podStartE2EDuration="2.259254056s" podCreationTimestamp="2026-01-26 18:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:05:14.228207837 +0000 UTC m=+1282.935343970" watchObservedRunningTime="2026-01-26 18:05:14.259254056 +0000 UTC m=+1282.966390189" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.383422 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.383788 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kkfm\" (UniqueName: \"kubernetes.io/projected/ca591aea-a146-4b51-887e-9688a249fdad-kube-api-access-7kkfm\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.383881 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.384008 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.485324 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.485544 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.485652 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kkfm\" (UniqueName: \"kubernetes.io/projected/ca591aea-a146-4b51-887e-9688a249fdad-kube-api-access-7kkfm\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.486070 
4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.489457 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.489605 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.492877 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.508807 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kkfm\" (UniqueName: \"kubernetes.io/projected/ca591aea-a146-4b51-887e-9688a249fdad-kube-api-access-7kkfm\") pod \"kube-state-metrics-0\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " pod="openstack/kube-state-metrics-0" Jan 26 18:05:14 crc kubenswrapper[4787]: I0126 18:05:14.553621 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:05:15 crc kubenswrapper[4787]: I0126 18:05:15.047611 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:05:15 crc kubenswrapper[4787]: W0126 18:05:15.057148 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca591aea_a146_4b51_887e_9688a249fdad.slice/crio-b8bb793fedf2bb76e6ea3e8223fd7eb0f2cb130108dd45414388e1e84a58afdd WatchSource:0}: Error finding container b8bb793fedf2bb76e6ea3e8223fd7eb0f2cb130108dd45414388e1e84a58afdd: Status 404 returned error can't find the container with id b8bb793fedf2bb76e6ea3e8223fd7eb0f2cb130108dd45414388e1e84a58afdd Jan 26 18:05:15 crc kubenswrapper[4787]: I0126 18:05:15.191389 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ca591aea-a146-4b51-887e-9688a249fdad","Type":"ContainerStarted","Data":"b8bb793fedf2bb76e6ea3e8223fd7eb0f2cb130108dd45414388e1e84a58afdd"} Jan 26 18:05:15 crc kubenswrapper[4787]: I0126 18:05:15.216672 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:15 crc kubenswrapper[4787]: I0126 18:05:15.217001 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="ceilometer-central-agent" containerID="cri-o://430d539df16f56714d22efd8e90b3c1890f372416d685fefd30069e93e6e7643" gracePeriod=30 Jan 26 18:05:15 crc kubenswrapper[4787]: I0126 18:05:15.217114 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="sg-core" containerID="cri-o://fecca0cc548e59b341664307e9f68924ab9a33c465b0b433903f91d827876469" gracePeriod=30 Jan 26 18:05:15 crc kubenswrapper[4787]: I0126 18:05:15.217137 4787 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="proxy-httpd" containerID="cri-o://bc6ad5472f4b52a21d777483fb774b5bdc0e32251b09e852f7a61df7009015a5" gracePeriod=30 Jan 26 18:05:15 crc kubenswrapper[4787]: I0126 18:05:15.217114 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="ceilometer-notification-agent" containerID="cri-o://bcd0be3eeb093ce43ce9786f0e22909ddde3933b41c4d54a2c494578c8e688b3" gracePeriod=30 Jan 26 18:05:15 crc kubenswrapper[4787]: I0126 18:05:15.599915 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1364ca0-6d34-493f-98b2-7956de27e72c" path="/var/lib/kubelet/pods/a1364ca0-6d34-493f-98b2-7956de27e72c/volumes" Jan 26 18:05:16 crc kubenswrapper[4787]: I0126 18:05:16.205813 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ca591aea-a146-4b51-887e-9688a249fdad","Type":"ContainerStarted","Data":"43961d5d06f3ba0dea16f2a4bbb78bb16b8ea8ace60a31feebf82fac2516b093"} Jan 26 18:05:16 crc kubenswrapper[4787]: I0126 18:05:16.205981 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 18:05:16 crc kubenswrapper[4787]: I0126 18:05:16.212639 4787 generic.go:334] "Generic (PLEG): container finished" podID="20e74c27-77d5-499e-9b77-62b68816df9f" containerID="bc6ad5472f4b52a21d777483fb774b5bdc0e32251b09e852f7a61df7009015a5" exitCode=0 Jan 26 18:05:16 crc kubenswrapper[4787]: I0126 18:05:16.212672 4787 generic.go:334] "Generic (PLEG): container finished" podID="20e74c27-77d5-499e-9b77-62b68816df9f" containerID="fecca0cc548e59b341664307e9f68924ab9a33c465b0b433903f91d827876469" exitCode=2 Jan 26 18:05:16 crc kubenswrapper[4787]: I0126 18:05:16.212682 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="20e74c27-77d5-499e-9b77-62b68816df9f" containerID="430d539df16f56714d22efd8e90b3c1890f372416d685fefd30069e93e6e7643" exitCode=0 Jan 26 18:05:16 crc kubenswrapper[4787]: I0126 18:05:16.212724 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e74c27-77d5-499e-9b77-62b68816df9f","Type":"ContainerDied","Data":"bc6ad5472f4b52a21d777483fb774b5bdc0e32251b09e852f7a61df7009015a5"} Jan 26 18:05:16 crc kubenswrapper[4787]: I0126 18:05:16.212783 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e74c27-77d5-499e-9b77-62b68816df9f","Type":"ContainerDied","Data":"fecca0cc548e59b341664307e9f68924ab9a33c465b0b433903f91d827876469"} Jan 26 18:05:16 crc kubenswrapper[4787]: I0126 18:05:16.212800 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e74c27-77d5-499e-9b77-62b68816df9f","Type":"ContainerDied","Data":"430d539df16f56714d22efd8e90b3c1890f372416d685fefd30069e93e6e7643"} Jan 26 18:05:16 crc kubenswrapper[4787]: I0126 18:05:16.239202 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.867295479 podStartE2EDuration="2.239176889s" podCreationTimestamp="2026-01-26 18:05:14 +0000 UTC" firstStartedPulling="2026-01-26 18:05:15.058973027 +0000 UTC m=+1283.766109160" lastFinishedPulling="2026-01-26 18:05:15.430854427 +0000 UTC m=+1284.137990570" observedRunningTime="2026-01-26 18:05:16.231003209 +0000 UTC m=+1284.938139342" watchObservedRunningTime="2026-01-26 18:05:16.239176889 +0000 UTC m=+1284.946313022" Jan 26 18:05:17 crc kubenswrapper[4787]: I0126 18:05:17.446679 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 18:05:17 crc kubenswrapper[4787]: I0126 18:05:17.447108 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 18:05:17 
crc kubenswrapper[4787]: I0126 18:05:17.735739 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.229974 4787 generic.go:334] "Generic (PLEG): container finished" podID="20e74c27-77d5-499e-9b77-62b68816df9f" containerID="bcd0be3eeb093ce43ce9786f0e22909ddde3933b41c4d54a2c494578c8e688b3" exitCode=0 Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.230015 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e74c27-77d5-499e-9b77-62b68816df9f","Type":"ContainerDied","Data":"bcd0be3eeb093ce43ce9786f0e22909ddde3933b41c4d54a2c494578c8e688b3"} Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.230038 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e74c27-77d5-499e-9b77-62b68816df9f","Type":"ContainerDied","Data":"b394a62e27f97b658ae6a1a527c8699a336e39313b1d57a1140963d42ba92526"} Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.230049 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b394a62e27f97b658ae6a1a527c8699a336e39313b1d57a1140963d42ba92526" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.315998 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.464007 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-combined-ca-bundle\") pod \"20e74c27-77d5-499e-9b77-62b68816df9f\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.464078 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2tft\" (UniqueName: \"kubernetes.io/projected/20e74c27-77d5-499e-9b77-62b68816df9f-kube-api-access-n2tft\") pod \"20e74c27-77d5-499e-9b77-62b68816df9f\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.464122 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-scripts\") pod \"20e74c27-77d5-499e-9b77-62b68816df9f\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.464149 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-config-data\") pod \"20e74c27-77d5-499e-9b77-62b68816df9f\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.464271 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-sg-core-conf-yaml\") pod \"20e74c27-77d5-499e-9b77-62b68816df9f\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.464325 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-run-httpd\") pod \"20e74c27-77d5-499e-9b77-62b68816df9f\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.464388 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-log-httpd\") pod \"20e74c27-77d5-499e-9b77-62b68816df9f\" (UID: \"20e74c27-77d5-499e-9b77-62b68816df9f\") " Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.466155 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.466139 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20e74c27-77d5-499e-9b77-62b68816df9f" (UID: "20e74c27-77d5-499e-9b77-62b68816df9f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.466175 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.466242 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20e74c27-77d5-499e-9b77-62b68816df9f" (UID: "20e74c27-77d5-499e-9b77-62b68816df9f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.471831 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e74c27-77d5-499e-9b77-62b68816df9f-kube-api-access-n2tft" (OuterVolumeSpecName: "kube-api-access-n2tft") pod "20e74c27-77d5-499e-9b77-62b68816df9f" (UID: "20e74c27-77d5-499e-9b77-62b68816df9f"). InnerVolumeSpecName "kube-api-access-n2tft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.472265 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-scripts" (OuterVolumeSpecName: "scripts") pod "20e74c27-77d5-499e-9b77-62b68816df9f" (UID: "20e74c27-77d5-499e-9b77-62b68816df9f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.498672 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20e74c27-77d5-499e-9b77-62b68816df9f" (UID: "20e74c27-77d5-499e-9b77-62b68816df9f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.553757 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20e74c27-77d5-499e-9b77-62b68816df9f" (UID: "20e74c27-77d5-499e-9b77-62b68816df9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.568085 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.568130 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e74c27-77d5-499e-9b77-62b68816df9f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.568151 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.568167 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2tft\" (UniqueName: \"kubernetes.io/projected/20e74c27-77d5-499e-9b77-62b68816df9f-kube-api-access-n2tft\") on node \"crc\" 
DevicePath \"\"" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.568179 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.568187 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.572026 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-config-data" (OuterVolumeSpecName: "config-data") pod "20e74c27-77d5-499e-9b77-62b68816df9f" (UID: "20e74c27-77d5-499e-9b77-62b68816df9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:18 crc kubenswrapper[4787]: I0126 18:05:18.670331 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e74c27-77d5-499e-9b77-62b68816df9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.239104 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.277035 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.287099 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.295690 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:19 crc kubenswrapper[4787]: E0126 18:05:19.296077 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="ceilometer-notification-agent" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.296096 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="ceilometer-notification-agent" Jan 26 18:05:19 crc kubenswrapper[4787]: E0126 18:05:19.296115 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="ceilometer-central-agent" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.296121 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="ceilometer-central-agent" Jan 26 18:05:19 crc kubenswrapper[4787]: E0126 18:05:19.296131 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="proxy-httpd" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.296138 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="proxy-httpd" Jan 26 18:05:19 crc kubenswrapper[4787]: E0126 18:05:19.296165 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="sg-core" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.296171 4787 
state_mem.go:107] "Deleted CPUSet assignment" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="sg-core" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.296321 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="sg-core" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.296333 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="ceilometer-central-agent" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.296344 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="proxy-httpd" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.296365 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" containerName="ceilometer-notification-agent" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.298684 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.301709 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.302729 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.306162 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.314531 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.383927 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-log-httpd\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.384188 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.384290 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.384341 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-config-data\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.384360 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sfc5\" (UniqueName: \"kubernetes.io/projected/ca4db929-9b46-4e22-9a98-a72e2aedc346-kube-api-access-4sfc5\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.384424 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-scripts\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.384458 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-run-httpd\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.384488 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.485653 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.485726 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-log-httpd\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.485795 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.485829 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.485865 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-config-data\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.485881 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sfc5\" (UniqueName: \"kubernetes.io/projected/ca4db929-9b46-4e22-9a98-a72e2aedc346-kube-api-access-4sfc5\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.485902 4787 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-scripts\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.485916 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-run-httpd\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.486558 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-run-httpd\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.490410 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-log-httpd\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.491749 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.494980 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 
18:05:19.495252 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.497723 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-config-data\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.505261 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-scripts\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.511581 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sfc5\" (UniqueName: \"kubernetes.io/projected/ca4db929-9b46-4e22-9a98-a72e2aedc346-kube-api-access-4sfc5\") pod \"ceilometer-0\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " pod="openstack/ceilometer-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.522159 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.600163 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e74c27-77d5-499e-9b77-62b68816df9f" path="/var/lib/kubelet/pods/20e74c27-77d5-499e-9b77-62b68816df9f/volumes" Jan 26 18:05:19 crc kubenswrapper[4787]: I0126 18:05:19.618912 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:05:20 crc kubenswrapper[4787]: I0126 18:05:20.100343 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:20 crc kubenswrapper[4787]: I0126 18:05:20.248657 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca4db929-9b46-4e22-9a98-a72e2aedc346","Type":"ContainerStarted","Data":"3a3b2881b95cd62daaa205926f15242b5928bc5a3f0f9f57dee236d61e07717f"} Jan 26 18:05:21 crc kubenswrapper[4787]: I0126 18:05:21.262861 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca4db929-9b46-4e22-9a98-a72e2aedc346","Type":"ContainerStarted","Data":"700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7"} Jan 26 18:05:22 crc kubenswrapper[4787]: I0126 18:05:22.272807 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca4db929-9b46-4e22-9a98-a72e2aedc346","Type":"ContainerStarted","Data":"7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3"} Jan 26 18:05:22 crc kubenswrapper[4787]: I0126 18:05:22.273190 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca4db929-9b46-4e22-9a98-a72e2aedc346","Type":"ContainerStarted","Data":"87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736"} Jan 26 18:05:22 crc kubenswrapper[4787]: I0126 18:05:22.594007 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:05:22 crc kubenswrapper[4787]: I0126 18:05:22.594067 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:05:22 crc kubenswrapper[4787]: I0126 18:05:22.735108 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 18:05:22 crc kubenswrapper[4787]: I0126 18:05:22.763709 4787 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 18:05:23 crc kubenswrapper[4787]: I0126 18:05:23.322547 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 18:05:23 crc kubenswrapper[4787]: I0126 18:05:23.676167 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 18:05:23 crc kubenswrapper[4787]: I0126 18:05:23.676171 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 18:05:24 crc kubenswrapper[4787]: I0126 18:05:24.293599 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca4db929-9b46-4e22-9a98-a72e2aedc346","Type":"ContainerStarted","Data":"ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73"} Jan 26 18:05:24 crc kubenswrapper[4787]: I0126 18:05:24.293876 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:05:24 crc kubenswrapper[4787]: I0126 18:05:24.573356 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 18:05:24 crc kubenswrapper[4787]: I0126 18:05:24.591335 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.093764507 podStartE2EDuration="5.59131838s" podCreationTimestamp="2026-01-26 18:05:19 +0000 UTC" firstStartedPulling="2026-01-26 18:05:20.108921331 +0000 UTC m=+1288.816057464" lastFinishedPulling="2026-01-26 
18:05:23.606475204 +0000 UTC m=+1292.313611337" observedRunningTime="2026-01-26 18:05:24.337159756 +0000 UTC m=+1293.044295889" watchObservedRunningTime="2026-01-26 18:05:24.59131838 +0000 UTC m=+1293.298454503" Jan 26 18:05:27 crc kubenswrapper[4787]: I0126 18:05:27.452078 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 18:05:27 crc kubenswrapper[4787]: I0126 18:05:27.452797 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 18:05:27 crc kubenswrapper[4787]: I0126 18:05:27.461961 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 18:05:27 crc kubenswrapper[4787]: I0126 18:05:27.462716 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 18:05:30 crc kubenswrapper[4787]: I0126 18:05:30.350760 4787 generic.go:334] "Generic (PLEG): container finished" podID="c4fe2236-fb3c-42c4-b91f-ccf847666b71" containerID="a944c399e051ab43c2f49a7954480b84c4ad98a2cd0b528ec30a787b142bcdfc" exitCode=137 Jan 26 18:05:30 crc kubenswrapper[4787]: I0126 18:05:30.350866 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c4fe2236-fb3c-42c4-b91f-ccf847666b71","Type":"ContainerDied","Data":"a944c399e051ab43c2f49a7954480b84c4ad98a2cd0b528ec30a787b142bcdfc"} Jan 26 18:05:30 crc kubenswrapper[4787]: I0126 18:05:30.867158 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:30 crc kubenswrapper[4787]: I0126 18:05:30.926426 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-config-data\") pod \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " Jan 26 18:05:30 crc kubenswrapper[4787]: I0126 18:05:30.926565 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vvbh\" (UniqueName: \"kubernetes.io/projected/c4fe2236-fb3c-42c4-b91f-ccf847666b71-kube-api-access-6vvbh\") pod \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " Jan 26 18:05:30 crc kubenswrapper[4787]: I0126 18:05:30.926601 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-combined-ca-bundle\") pod \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " Jan 26 18:05:30 crc kubenswrapper[4787]: I0126 18:05:30.934041 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4fe2236-fb3c-42c4-b91f-ccf847666b71-kube-api-access-6vvbh" (OuterVolumeSpecName: "kube-api-access-6vvbh") pod "c4fe2236-fb3c-42c4-b91f-ccf847666b71" (UID: "c4fe2236-fb3c-42c4-b91f-ccf847666b71"). InnerVolumeSpecName "kube-api-access-6vvbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:30 crc kubenswrapper[4787]: E0126 18:05:30.952368 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-combined-ca-bundle podName:c4fe2236-fb3c-42c4-b91f-ccf847666b71 nodeName:}" failed. No retries permitted until 2026-01-26 18:05:31.45233953 +0000 UTC m=+1300.159475663 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-combined-ca-bundle") pod "c4fe2236-fb3c-42c4-b91f-ccf847666b71" (UID: "c4fe2236-fb3c-42c4-b91f-ccf847666b71") : error deleting /var/lib/kubelet/pods/c4fe2236-fb3c-42c4-b91f-ccf847666b71/volume-subpaths: remove /var/lib/kubelet/pods/c4fe2236-fb3c-42c4-b91f-ccf847666b71/volume-subpaths: no such file or directory Jan 26 18:05:30 crc kubenswrapper[4787]: I0126 18:05:30.955177 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-config-data" (OuterVolumeSpecName: "config-data") pod "c4fe2236-fb3c-42c4-b91f-ccf847666b71" (UID: "c4fe2236-fb3c-42c4-b91f-ccf847666b71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.028289 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.028324 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vvbh\" (UniqueName: \"kubernetes.io/projected/c4fe2236-fb3c-42c4-b91f-ccf847666b71-kube-api-access-6vvbh\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.364447 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c4fe2236-fb3c-42c4-b91f-ccf847666b71","Type":"ContainerDied","Data":"08efaf9603d4fa11e7d71f55b2f111992d88b50f7e0f7795ed4738fcb38cf928"} Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.364516 4787 scope.go:117] "RemoveContainer" containerID="a944c399e051ab43c2f49a7954480b84c4ad98a2cd0b528ec30a787b142bcdfc" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.364522 4787 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.537124 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-combined-ca-bundle\") pod \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\" (UID: \"c4fe2236-fb3c-42c4-b91f-ccf847666b71\") " Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.540398 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4fe2236-fb3c-42c4-b91f-ccf847666b71" (UID: "c4fe2236-fb3c-42c4-b91f-ccf847666b71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.640030 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fe2236-fb3c-42c4-b91f-ccf847666b71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.687390 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.696150 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.710785 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:05:31 crc kubenswrapper[4787]: E0126 18:05:31.711176 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fe2236-fb3c-42c4-b91f-ccf847666b71" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.711192 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c4fe2236-fb3c-42c4-b91f-ccf847666b71" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.711380 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fe2236-fb3c-42c4-b91f-ccf847666b71" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.711930 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.715996 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.716387 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.716828 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.730923 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.741617 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.741659 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzjjd\" (UniqueName: \"kubernetes.io/projected/6333914f-1303-43b8-ac9b-88c29e2bea64-kube-api-access-jzjjd\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.741684 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.741993 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.743161 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.857442 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.857506 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.857572 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.857587 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzjjd\" (UniqueName: \"kubernetes.io/projected/6333914f-1303-43b8-ac9b-88c29e2bea64-kube-api-access-jzjjd\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.857604 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.861639 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.861639 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 
18:05:31.862706 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.863197 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:31 crc kubenswrapper[4787]: I0126 18:05:31.874473 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzjjd\" (UniqueName: \"kubernetes.io/projected/6333914f-1303-43b8-ac9b-88c29e2bea64-kube-api-access-jzjjd\") pod \"nova-cell1-novncproxy-0\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:32 crc kubenswrapper[4787]: I0126 18:05:32.028708 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:32 crc kubenswrapper[4787]: I0126 18:05:32.493212 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:05:32 crc kubenswrapper[4787]: I0126 18:05:32.598735 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 18:05:32 crc kubenswrapper[4787]: I0126 18:05:32.599376 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 18:05:32 crc kubenswrapper[4787]: I0126 18:05:32.602118 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 18:05:32 crc kubenswrapper[4787]: I0126 18:05:32.621201 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.389458 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6333914f-1303-43b8-ac9b-88c29e2bea64","Type":"ContainerStarted","Data":"769d9a6f91f28c2b54187f40ecdd34c5613915c3aa1b59893f5cdce81e1a0acf"} Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.389813 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6333914f-1303-43b8-ac9b-88c29e2bea64","Type":"ContainerStarted","Data":"7f0f6e3fc225bc0f915b35abdf948a03abc7edb2914dd5a8f9893401bd85440c"} Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.390283 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.396906 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.411765 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=2.411742395 podStartE2EDuration="2.411742395s" podCreationTimestamp="2026-01-26 18:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:05:33.409306017 +0000 UTC m=+1302.116442140" watchObservedRunningTime="2026-01-26 18:05:33.411742395 +0000 UTC m=+1302.118878578" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.606752 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4fe2236-fb3c-42c4-b91f-ccf847666b71" path="/var/lib/kubelet/pods/c4fe2236-fb3c-42c4-b91f-ccf847666b71/volumes" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.607511 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-zw55k"] Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.615057 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.636917 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-zw55k"] Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.697491 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.697823 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-config\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 
18:05:33.697908 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shs4t\" (UniqueName: \"kubernetes.io/projected/890207cf-4cf3-4962-b5a1-19c076fbdeaa-kube-api-access-shs4t\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.698058 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.698085 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.698112 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.799569 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc 
kubenswrapper[4787]: I0126 18:05:33.799625 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.799662 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.799712 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.799759 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-config\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.799828 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shs4t\" (UniqueName: \"kubernetes.io/projected/890207cf-4cf3-4962-b5a1-19c076fbdeaa-kube-api-access-shs4t\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.803891 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.804429 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.804528 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.805272 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.805458 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-config\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.841224 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shs4t\" (UniqueName: 
\"kubernetes.io/projected/890207cf-4cf3-4962-b5a1-19c076fbdeaa-kube-api-access-shs4t\") pod \"dnsmasq-dns-fcd6f8f8f-zw55k\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:33 crc kubenswrapper[4787]: I0126 18:05:33.937543 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:34 crc kubenswrapper[4787]: I0126 18:05:34.410608 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-zw55k"] Jan 26 18:05:35 crc kubenswrapper[4787]: I0126 18:05:35.410055 4787 generic.go:334] "Generic (PLEG): container finished" podID="890207cf-4cf3-4962-b5a1-19c076fbdeaa" containerID="1bf368f648b16f9080c130c5f979912ccf94597615e98ae4c78c8f74cbb65e2d" exitCode=0 Jan 26 18:05:35 crc kubenswrapper[4787]: I0126 18:05:35.410107 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" event={"ID":"890207cf-4cf3-4962-b5a1-19c076fbdeaa","Type":"ContainerDied","Data":"1bf368f648b16f9080c130c5f979912ccf94597615e98ae4c78c8f74cbb65e2d"} Jan 26 18:05:35 crc kubenswrapper[4787]: I0126 18:05:35.412429 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" event={"ID":"890207cf-4cf3-4962-b5a1-19c076fbdeaa","Type":"ContainerStarted","Data":"ccb2e7541241846f9d05971f21e6b0d12ed7cd8993a39e48515683a4e3df279e"} Jan 26 18:05:35 crc kubenswrapper[4787]: I0126 18:05:35.862981 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:35 crc kubenswrapper[4787]: I0126 18:05:35.863253 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="ceilometer-central-agent" containerID="cri-o://700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7" gracePeriod=30 Jan 26 18:05:35 crc kubenswrapper[4787]: I0126 
18:05:35.863340 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="sg-core" containerID="cri-o://7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3" gracePeriod=30 Jan 26 18:05:35 crc kubenswrapper[4787]: I0126 18:05:35.863381 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="proxy-httpd" containerID="cri-o://ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73" gracePeriod=30 Jan 26 18:05:35 crc kubenswrapper[4787]: I0126 18:05:35.863340 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="ceilometer-notification-agent" containerID="cri-o://87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736" gracePeriod=30 Jan 26 18:05:35 crc kubenswrapper[4787]: I0126 18:05:35.885197 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 26 18:05:35 crc kubenswrapper[4787]: I0126 18:05:35.914342 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:36 crc kubenswrapper[4787]: I0126 18:05:36.421175 4787 generic.go:334] "Generic (PLEG): container finished" podID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerID="ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73" exitCode=0 Jan 26 18:05:36 crc kubenswrapper[4787]: I0126 18:05:36.421486 4787 generic.go:334] "Generic (PLEG): container finished" podID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerID="7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3" exitCode=2 Jan 26 18:05:36 crc kubenswrapper[4787]: I0126 
18:05:36.421494 4787 generic.go:334] "Generic (PLEG): container finished" podID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerID="700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7" exitCode=0 Jan 26 18:05:36 crc kubenswrapper[4787]: I0126 18:05:36.421258 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca4db929-9b46-4e22-9a98-a72e2aedc346","Type":"ContainerDied","Data":"ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73"} Jan 26 18:05:36 crc kubenswrapper[4787]: I0126 18:05:36.421539 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca4db929-9b46-4e22-9a98-a72e2aedc346","Type":"ContainerDied","Data":"7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3"} Jan 26 18:05:36 crc kubenswrapper[4787]: I0126 18:05:36.421555 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca4db929-9b46-4e22-9a98-a72e2aedc346","Type":"ContainerDied","Data":"700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7"} Jan 26 18:05:36 crc kubenswrapper[4787]: I0126 18:05:36.423677 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" event={"ID":"890207cf-4cf3-4962-b5a1-19c076fbdeaa","Type":"ContainerStarted","Data":"1dc0bd2a4ecc7a68b77285c82ea23aa2dabc9c62ebde2984da45e41acbf3b11c"} Jan 26 18:05:36 crc kubenswrapper[4787]: I0126 18:05:36.423794 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerName="nova-api-log" containerID="cri-o://811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a" gracePeriod=30 Jan 26 18:05:36 crc kubenswrapper[4787]: I0126 18:05:36.423856 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerName="nova-api-api" 
containerID="cri-o://3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa" gracePeriod=30 Jan 26 18:05:36 crc kubenswrapper[4787]: I0126 18:05:36.458222 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" podStartSLOduration=3.458203236 podStartE2EDuration="3.458203236s" podCreationTimestamp="2026-01-26 18:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:05:36.449365027 +0000 UTC m=+1305.156501160" watchObservedRunningTime="2026-01-26 18:05:36.458203236 +0000 UTC m=+1305.165339369" Jan 26 18:05:37 crc kubenswrapper[4787]: I0126 18:05:37.029809 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:37 crc kubenswrapper[4787]: I0126 18:05:37.434972 4787 generic.go:334] "Generic (PLEG): container finished" podID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerID="811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a" exitCode=143 Jan 26 18:05:37 crc kubenswrapper[4787]: I0126 18:05:37.435063 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e","Type":"ContainerDied","Data":"811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a"} Jan 26 18:05:37 crc kubenswrapper[4787]: I0126 18:05:37.435206 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:37 crc kubenswrapper[4787]: I0126 18:05:37.992315 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.079327 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-scripts\") pod \"ca4db929-9b46-4e22-9a98-a72e2aedc346\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.079419 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-config-data\") pod \"ca4db929-9b46-4e22-9a98-a72e2aedc346\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.080265 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sfc5\" (UniqueName: \"kubernetes.io/projected/ca4db929-9b46-4e22-9a98-a72e2aedc346-kube-api-access-4sfc5\") pod \"ca4db929-9b46-4e22-9a98-a72e2aedc346\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.080348 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-sg-core-conf-yaml\") pod \"ca4db929-9b46-4e22-9a98-a72e2aedc346\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.080460 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-run-httpd\") pod \"ca4db929-9b46-4e22-9a98-a72e2aedc346\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.080509 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-combined-ca-bundle\") pod \"ca4db929-9b46-4e22-9a98-a72e2aedc346\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.080552 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-ceilometer-tls-certs\") pod \"ca4db929-9b46-4e22-9a98-a72e2aedc346\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.080828 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-log-httpd\") pod \"ca4db929-9b46-4e22-9a98-a72e2aedc346\" (UID: \"ca4db929-9b46-4e22-9a98-a72e2aedc346\") " Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.081987 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ca4db929-9b46-4e22-9a98-a72e2aedc346" (UID: "ca4db929-9b46-4e22-9a98-a72e2aedc346"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.082334 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ca4db929-9b46-4e22-9a98-a72e2aedc346" (UID: "ca4db929-9b46-4e22-9a98-a72e2aedc346"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.085375 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4db929-9b46-4e22-9a98-a72e2aedc346-kube-api-access-4sfc5" (OuterVolumeSpecName: "kube-api-access-4sfc5") pod "ca4db929-9b46-4e22-9a98-a72e2aedc346" (UID: "ca4db929-9b46-4e22-9a98-a72e2aedc346"). InnerVolumeSpecName "kube-api-access-4sfc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.085970 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-scripts" (OuterVolumeSpecName: "scripts") pod "ca4db929-9b46-4e22-9a98-a72e2aedc346" (UID: "ca4db929-9b46-4e22-9a98-a72e2aedc346"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.115208 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ca4db929-9b46-4e22-9a98-a72e2aedc346" (UID: "ca4db929-9b46-4e22-9a98-a72e2aedc346"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.161659 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ca4db929-9b46-4e22-9a98-a72e2aedc346" (UID: "ca4db929-9b46-4e22-9a98-a72e2aedc346"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.162109 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca4db929-9b46-4e22-9a98-a72e2aedc346" (UID: "ca4db929-9b46-4e22-9a98-a72e2aedc346"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.182930 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.182983 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.182997 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.183010 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sfc5\" (UniqueName: \"kubernetes.io/projected/ca4db929-9b46-4e22-9a98-a72e2aedc346-kube-api-access-4sfc5\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.183024 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.183035 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ca4db929-9b46-4e22-9a98-a72e2aedc346-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.183046 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.188679 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-config-data" (OuterVolumeSpecName: "config-data") pod "ca4db929-9b46-4e22-9a98-a72e2aedc346" (UID: "ca4db929-9b46-4e22-9a98-a72e2aedc346"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.284828 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4db929-9b46-4e22-9a98-a72e2aedc346-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.448906 4787 generic.go:334] "Generic (PLEG): container finished" podID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerID="87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736" exitCode=0 Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.448997 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca4db929-9b46-4e22-9a98-a72e2aedc346","Type":"ContainerDied","Data":"87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736"} Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.449059 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.449087 4787 scope.go:117] "RemoveContainer" containerID="ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.449073 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca4db929-9b46-4e22-9a98-a72e2aedc346","Type":"ContainerDied","Data":"3a3b2881b95cd62daaa205926f15242b5928bc5a3f0f9f57dee236d61e07717f"} Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.483620 4787 scope.go:117] "RemoveContainer" containerID="7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.511614 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.516662 4787 scope.go:117] "RemoveContainer" containerID="87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.542022 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.555921 4787 scope.go:117] "RemoveContainer" containerID="700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.569697 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:38 crc kubenswrapper[4787]: E0126 18:05:38.570741 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="ceilometer-notification-agent" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.570778 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="ceilometer-notification-agent" Jan 26 18:05:38 crc kubenswrapper[4787]: E0126 18:05:38.570818 4787 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="sg-core" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.570825 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="sg-core" Jan 26 18:05:38 crc kubenswrapper[4787]: E0126 18:05:38.570856 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="proxy-httpd" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.570864 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="proxy-httpd" Jan 26 18:05:38 crc kubenswrapper[4787]: E0126 18:05:38.570882 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="ceilometer-central-agent" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.570891 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="ceilometer-central-agent" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.571441 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="proxy-httpd" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.571457 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="sg-core" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.571475 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="ceilometer-central-agent" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.571493 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" containerName="ceilometer-notification-agent" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 
18:05:38.586836 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.590475 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.590745 4787 scope.go:117] "RemoveContainer" containerID="ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.591256 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.591752 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.593036 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:38 crc kubenswrapper[4787]: E0126 18:05:38.594624 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73\": container with ID starting with ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73 not found: ID does not exist" containerID="ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.594738 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73"} err="failed to get container status \"ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73\": rpc error: code = NotFound desc = could not find container \"ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73\": container with ID starting with ffe410a4d121865853997d89deb79705fbdb86b58897cc2f300fd80d0561da73 not found: ID does 
not exist" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.594852 4787 scope.go:117] "RemoveContainer" containerID="7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3" Jan 26 18:05:38 crc kubenswrapper[4787]: E0126 18:05:38.596493 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3\": container with ID starting with 7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3 not found: ID does not exist" containerID="7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.596531 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3"} err="failed to get container status \"7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3\": rpc error: code = NotFound desc = could not find container \"7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3\": container with ID starting with 7cb78d86a5c24d5a170acbf28cb36ed28ce616cbf9760f6bbe3ce5037d1ca7b3 not found: ID does not exist" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.596563 4787 scope.go:117] "RemoveContainer" containerID="87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736" Jan 26 18:05:38 crc kubenswrapper[4787]: E0126 18:05:38.596842 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736\": container with ID starting with 87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736 not found: ID does not exist" containerID="87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.596865 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736"} err="failed to get container status \"87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736\": rpc error: code = NotFound desc = could not find container \"87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736\": container with ID starting with 87ec71162eae799ca26c708b16bd75a0bd7eb16ba5d29f575c28c320f0e6e736 not found: ID does not exist" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.596879 4787 scope.go:117] "RemoveContainer" containerID="700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7" Jan 26 18:05:38 crc kubenswrapper[4787]: E0126 18:05:38.597103 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7\": container with ID starting with 700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7 not found: ID does not exist" containerID="700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.597121 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7"} err="failed to get container status \"700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7\": rpc error: code = NotFound desc = could not find container \"700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7\": container with ID starting with 700ec537f6202add3dbc46c135253e884925086195be6291f8c2c4314b1bffc7 not found: ID does not exist" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.693607 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.693663 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-scripts\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.693724 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-log-httpd\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.694030 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhnp5\" (UniqueName: \"kubernetes.io/projected/b97fef39-62f3-4457-924d-4b25c40fe88d-kube-api-access-qhnp5\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.694145 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.694197 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.694294 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-config-data\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.694333 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-run-httpd\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.796452 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-scripts\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.796544 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.797367 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-log-httpd\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.797802 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-log-httpd\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.797997 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhnp5\" (UniqueName: \"kubernetes.io/projected/b97fef39-62f3-4457-924d-4b25c40fe88d-kube-api-access-qhnp5\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.798047 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.798101 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.798442 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-config-data\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.798479 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-run-httpd\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 
18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.798852 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-run-httpd\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.802107 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.802547 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.803032 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-scripts\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.803076 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-config-data\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.804342 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.834601 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhnp5\" (UniqueName: \"kubernetes.io/projected/b97fef39-62f3-4457-924d-4b25c40fe88d-kube-api-access-qhnp5\") pod \"ceilometer-0\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " pod="openstack/ceilometer-0" Jan 26 18:05:38 crc kubenswrapper[4787]: I0126 18:05:38.954825 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:05:39 crc kubenswrapper[4787]: I0126 18:05:39.443257 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:05:39 crc kubenswrapper[4787]: W0126 18:05:39.447820 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb97fef39_62f3_4457_924d_4b25c40fe88d.slice/crio-2dce9a4909e55352399e54211cb86252ec74af8707e45efcb033b1175108d692 WatchSource:0}: Error finding container 2dce9a4909e55352399e54211cb86252ec74af8707e45efcb033b1175108d692: Status 404 returned error can't find the container with id 2dce9a4909e55352399e54211cb86252ec74af8707e45efcb033b1175108d692 Jan 26 18:05:39 crc kubenswrapper[4787]: I0126 18:05:39.464693 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b97fef39-62f3-4457-924d-4b25c40fe88d","Type":"ContainerStarted","Data":"2dce9a4909e55352399e54211cb86252ec74af8707e45efcb033b1175108d692"} Jan 26 18:05:39 crc kubenswrapper[4787]: I0126 18:05:39.601890 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4db929-9b46-4e22-9a98-a72e2aedc346" path="/var/lib/kubelet/pods/ca4db929-9b46-4e22-9a98-a72e2aedc346/volumes" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.101767 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.224124 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdt97\" (UniqueName: \"kubernetes.io/projected/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-kube-api-access-kdt97\") pod \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.224455 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-config-data\") pod \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.224512 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-logs\") pod \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.224550 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-combined-ca-bundle\") pod \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\" (UID: \"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e\") " Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.226487 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-logs" (OuterVolumeSpecName: "logs") pod "fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" (UID: "fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.231302 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-kube-api-access-kdt97" (OuterVolumeSpecName: "kube-api-access-kdt97") pod "fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" (UID: "fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e"). InnerVolumeSpecName "kube-api-access-kdt97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.269075 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-config-data" (OuterVolumeSpecName: "config-data") pod "fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" (UID: "fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.273482 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" (UID: "fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.327157 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdt97\" (UniqueName: \"kubernetes.io/projected/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-kube-api-access-kdt97\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.327190 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.327200 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.327208 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.476980 4787 generic.go:334] "Generic (PLEG): container finished" podID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerID="3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa" exitCode=0 Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.477038 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.477031 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e","Type":"ContainerDied","Data":"3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa"} Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.477206 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e","Type":"ContainerDied","Data":"dbdf912a6e048df87834cac4280a8816831d84854a553627e6bf5c09b6105aad"} Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.477231 4787 scope.go:117] "RemoveContainer" containerID="3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.480121 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b97fef39-62f3-4457-924d-4b25c40fe88d","Type":"ContainerStarted","Data":"85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898"} Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.503220 4787 scope.go:117] "RemoveContainer" containerID="811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.519133 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.523824 4787 scope.go:117] "RemoveContainer" containerID="3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa" Jan 26 18:05:40 crc kubenswrapper[4787]: E0126 18:05:40.525098 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa\": container with ID starting with 3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa not found: ID does 
not exist" containerID="3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.525149 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa"} err="failed to get container status \"3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa\": rpc error: code = NotFound desc = could not find container \"3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa\": container with ID starting with 3e0ee06bca345852bb038b03f3be138facb0623c5eaf3d31287700290f49a0aa not found: ID does not exist" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.525178 4787 scope.go:117] "RemoveContainer" containerID="811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a" Jan 26 18:05:40 crc kubenswrapper[4787]: E0126 18:05:40.525597 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a\": container with ID starting with 811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a not found: ID does not exist" containerID="811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.525651 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a"} err="failed to get container status \"811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a\": rpc error: code = NotFound desc = could not find container \"811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a\": container with ID starting with 811b4262cd288a59f1c97e1f2229acea0d92fabdc7ab4c99300ede445de11b6a not found: ID does not exist" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.533218 4787 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.545473 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:40 crc kubenswrapper[4787]: E0126 18:05:40.545985 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerName="nova-api-log" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.546010 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerName="nova-api-log" Jan 26 18:05:40 crc kubenswrapper[4787]: E0126 18:05:40.546048 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerName="nova-api-api" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.546058 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerName="nova-api-api" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.546288 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerName="nova-api-api" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.546322 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" containerName="nova-api-log" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.547526 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.550981 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.551346 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.551484 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.563479 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.632893 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.632972 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t54ks\" (UniqueName: \"kubernetes.io/projected/83019da6-c787-45a2-a3fb-83268d56b241-kube-api-access-t54ks\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.632997 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-config-data\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.633057 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-public-tls-certs\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.633125 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83019da6-c787-45a2-a3fb-83268d56b241-logs\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.633140 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-internal-tls-certs\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.735314 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-public-tls-certs\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.735409 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-internal-tls-certs\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.735432 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83019da6-c787-45a2-a3fb-83268d56b241-logs\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 
crc kubenswrapper[4787]: I0126 18:05:40.735509 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.735535 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t54ks\" (UniqueName: \"kubernetes.io/projected/83019da6-c787-45a2-a3fb-83268d56b241-kube-api-access-t54ks\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.735551 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-config-data\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.735901 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83019da6-c787-45a2-a3fb-83268d56b241-logs\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.739555 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-config-data\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.739861 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.740529 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-internal-tls-certs\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.741788 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-public-tls-certs\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.755925 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t54ks\" (UniqueName: \"kubernetes.io/projected/83019da6-c787-45a2-a3fb-83268d56b241-kube-api-access-t54ks\") pod \"nova-api-0\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " pod="openstack/nova-api-0" Jan 26 18:05:40 crc kubenswrapper[4787]: I0126 18:05:40.865096 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:41 crc kubenswrapper[4787]: I0126 18:05:41.307090 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:41 crc kubenswrapper[4787]: W0126 18:05:41.308297 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83019da6_c787_45a2_a3fb_83268d56b241.slice/crio-acba7078c0ca83b6963199f910bdcb7fc40b34ac8bc2bdaecfd41786b73a12fc WatchSource:0}: Error finding container acba7078c0ca83b6963199f910bdcb7fc40b34ac8bc2bdaecfd41786b73a12fc: Status 404 returned error can't find the container with id acba7078c0ca83b6963199f910bdcb7fc40b34ac8bc2bdaecfd41786b73a12fc Jan 26 18:05:41 crc kubenswrapper[4787]: I0126 18:05:41.507387 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83019da6-c787-45a2-a3fb-83268d56b241","Type":"ContainerStarted","Data":"acba7078c0ca83b6963199f910bdcb7fc40b34ac8bc2bdaecfd41786b73a12fc"} Jan 26 18:05:41 crc kubenswrapper[4787]: I0126 18:05:41.517744 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b97fef39-62f3-4457-924d-4b25c40fe88d","Type":"ContainerStarted","Data":"33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27"} Jan 26 18:05:41 crc kubenswrapper[4787]: I0126 18:05:41.603921 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e" path="/var/lib/kubelet/pods/fb3f4c86-56fc-4ad4-b7c3-9ff64a5e5e0e/volumes" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.029551 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.046634 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 
18:05:42.527335 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83019da6-c787-45a2-a3fb-83268d56b241","Type":"ContainerStarted","Data":"ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a"} Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.527389 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83019da6-c787-45a2-a3fb-83268d56b241","Type":"ContainerStarted","Data":"1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759"} Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.529778 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b97fef39-62f3-4457-924d-4b25c40fe88d","Type":"ContainerStarted","Data":"75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c"} Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.564765 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.653780 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.653757954 podStartE2EDuration="2.653757954s" podCreationTimestamp="2026-01-26 18:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:05:42.590651028 +0000 UTC m=+1311.297787161" watchObservedRunningTime="2026-01-26 18:05:42.653757954 +0000 UTC m=+1311.360894087" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.834584 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dzjkz"] Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.835793 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.844021 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.844232 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.860595 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dzjkz"] Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.872244 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-config-data\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.872373 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-scripts\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.872393 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.872421 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdn7l\" (UniqueName: 
\"kubernetes.io/projected/6da1e593-c678-4e37-9471-f261e82e004c-kube-api-access-kdn7l\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.976140 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-scripts\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.976198 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.976239 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdn7l\" (UniqueName: \"kubernetes.io/projected/6da1e593-c678-4e37-9471-f261e82e004c-kube-api-access-kdn7l\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.976302 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-config-data\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.981666 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-scripts\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.985407 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-config-data\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:42 crc kubenswrapper[4787]: I0126 18:05:42.985733 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:43 crc kubenswrapper[4787]: I0126 18:05:43.000589 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdn7l\" (UniqueName: \"kubernetes.io/projected/6da1e593-c678-4e37-9471-f261e82e004c-kube-api-access-kdn7l\") pod \"nova-cell1-cell-mapping-dzjkz\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:43 crc kubenswrapper[4787]: I0126 18:05:43.037177 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:43 crc kubenswrapper[4787]: I0126 18:05:43.506747 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dzjkz"] Jan 26 18:05:43 crc kubenswrapper[4787]: W0126 18:05:43.517766 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice/crio-9a46126017130bcb88ed35c15b2b1eb4c16a399a072a641f98f81df2958d8bcc WatchSource:0}: Error finding container 9a46126017130bcb88ed35c15b2b1eb4c16a399a072a641f98f81df2958d8bcc: Status 404 returned error can't find the container with id 9a46126017130bcb88ed35c15b2b1eb4c16a399a072a641f98f81df2958d8bcc Jan 26 18:05:43 crc kubenswrapper[4787]: I0126 18:05:43.540229 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dzjkz" event={"ID":"6da1e593-c678-4e37-9471-f261e82e004c","Type":"ContainerStarted","Data":"9a46126017130bcb88ed35c15b2b1eb4c16a399a072a641f98f81df2958d8bcc"} Jan 26 18:05:43 crc kubenswrapper[4787]: I0126 18:05:43.542501 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b97fef39-62f3-4457-924d-4b25c40fe88d","Type":"ContainerStarted","Data":"2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb"} Jan 26 18:05:43 crc kubenswrapper[4787]: I0126 18:05:43.577633 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.260524547 podStartE2EDuration="5.577607235s" podCreationTimestamp="2026-01-26 18:05:38 +0000 UTC" firstStartedPulling="2026-01-26 18:05:39.450604887 +0000 UTC m=+1308.157741040" lastFinishedPulling="2026-01-26 18:05:42.767687595 +0000 UTC m=+1311.474823728" observedRunningTime="2026-01-26 18:05:43.571391848 +0000 UTC m=+1312.278527981" watchObservedRunningTime="2026-01-26 18:05:43.577607235 +0000 UTC 
m=+1312.284743368" Jan 26 18:05:43 crc kubenswrapper[4787]: I0126 18:05:43.939154 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.036533 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-z45bz"] Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.036849 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" podUID="8e9c8669-3692-4400-9699-a7892393fb7c" containerName="dnsmasq-dns" containerID="cri-o://c2a537cb6f61c3da89db2a989c5e41821717495c5cd63da19464b828f6e8ec9c" gracePeriod=10 Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.552777 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dzjkz" event={"ID":"6da1e593-c678-4e37-9471-f261e82e004c","Type":"ContainerStarted","Data":"725267d2e5c128154a56f6d56c73937e88fdae0aa20aeb6e22a66b6377696a3c"} Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.559569 4787 generic.go:334] "Generic (PLEG): container finished" podID="8e9c8669-3692-4400-9699-a7892393fb7c" containerID="c2a537cb6f61c3da89db2a989c5e41821717495c5cd63da19464b828f6e8ec9c" exitCode=0 Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.560613 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" event={"ID":"8e9c8669-3692-4400-9699-a7892393fb7c","Type":"ContainerDied","Data":"c2a537cb6f61c3da89db2a989c5e41821717495c5cd63da19464b828f6e8ec9c"} Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.560647 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.560661 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" 
event={"ID":"8e9c8669-3692-4400-9699-a7892393fb7c","Type":"ContainerDied","Data":"4f570b5f1434c2d8ee93a7e536dc7e3187a799e2e02cbf7ea4948f66e4a0d044"} Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.560674 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f570b5f1434c2d8ee93a7e536dc7e3187a799e2e02cbf7ea4948f66e4a0d044" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.576565 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dzjkz" podStartSLOduration=2.5765470969999997 podStartE2EDuration="2.576547097s" podCreationTimestamp="2026-01-26 18:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:05:44.575009311 +0000 UTC m=+1313.282145444" watchObservedRunningTime="2026-01-26 18:05:44.576547097 +0000 UTC m=+1313.283683230" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.597163 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.637883 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-nb\") pod \"8e9c8669-3692-4400-9699-a7892393fb7c\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.637996 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kllw2\" (UniqueName: \"kubernetes.io/projected/8e9c8669-3692-4400-9699-a7892393fb7c-kube-api-access-kllw2\") pod \"8e9c8669-3692-4400-9699-a7892393fb7c\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.638083 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-sb\") pod \"8e9c8669-3692-4400-9699-a7892393fb7c\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.638163 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-config\") pod \"8e9c8669-3692-4400-9699-a7892393fb7c\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.638186 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-swift-storage-0\") pod \"8e9c8669-3692-4400-9699-a7892393fb7c\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.638208 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-svc\") pod \"8e9c8669-3692-4400-9699-a7892393fb7c\" (UID: \"8e9c8669-3692-4400-9699-a7892393fb7c\") " Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.664630 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9c8669-3692-4400-9699-a7892393fb7c-kube-api-access-kllw2" (OuterVolumeSpecName: "kube-api-access-kllw2") pod "8e9c8669-3692-4400-9699-a7892393fb7c" (UID: "8e9c8669-3692-4400-9699-a7892393fb7c"). InnerVolumeSpecName "kube-api-access-kllw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.713405 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e9c8669-3692-4400-9699-a7892393fb7c" (UID: "8e9c8669-3692-4400-9699-a7892393fb7c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.722838 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8e9c8669-3692-4400-9699-a7892393fb7c" (UID: "8e9c8669-3692-4400-9699-a7892393fb7c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.723901 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e9c8669-3692-4400-9699-a7892393fb7c" (UID: "8e9c8669-3692-4400-9699-a7892393fb7c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.731681 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-config" (OuterVolumeSpecName: "config") pod "8e9c8669-3692-4400-9699-a7892393fb7c" (UID: "8e9c8669-3692-4400-9699-a7892393fb7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.740319 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.740353 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kllw2\" (UniqueName: \"kubernetes.io/projected/8e9c8669-3692-4400-9699-a7892393fb7c-kube-api-access-kllw2\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.740365 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.740375 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.740385 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.755390 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e9c8669-3692-4400-9699-a7892393fb7c" (UID: "8e9c8669-3692-4400-9699-a7892393fb7c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:05:44 crc kubenswrapper[4787]: I0126 18:05:44.842098 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e9c8669-3692-4400-9699-a7892393fb7c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:45 crc kubenswrapper[4787]: I0126 18:05:45.567318 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" Jan 26 18:05:45 crc kubenswrapper[4787]: I0126 18:05:45.610614 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-z45bz"] Jan 26 18:05:45 crc kubenswrapper[4787]: I0126 18:05:45.621742 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-z45bz"] Jan 26 18:05:47 crc kubenswrapper[4787]: I0126 18:05:47.603774 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9c8669-3692-4400-9699-a7892393fb7c" path="/var/lib/kubelet/pods/8e9c8669-3692-4400-9699-a7892393fb7c/volumes" Jan 26 18:05:49 crc kubenswrapper[4787]: I0126 18:05:49.456179 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-647df7b8c5-z45bz" podUID="8e9c8669-3692-4400-9699-a7892393fb7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.186:5353: i/o timeout" Jan 26 18:05:49 crc kubenswrapper[4787]: I0126 18:05:49.638849 4787 generic.go:334] "Generic (PLEG): container finished" podID="6da1e593-c678-4e37-9471-f261e82e004c" containerID="725267d2e5c128154a56f6d56c73937e88fdae0aa20aeb6e22a66b6377696a3c" exitCode=0 Jan 26 18:05:49 crc kubenswrapper[4787]: I0126 18:05:49.638896 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dzjkz" event={"ID":"6da1e593-c678-4e37-9471-f261e82e004c","Type":"ContainerDied","Data":"725267d2e5c128154a56f6d56c73937e88fdae0aa20aeb6e22a66b6377696a3c"} Jan 26 18:05:50 crc kubenswrapper[4787]: I0126 18:05:50.866087 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:05:50 crc kubenswrapper[4787]: I0126 18:05:50.866164 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:05:50 crc kubenswrapper[4787]: I0126 18:05:50.992904 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.074725 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdn7l\" (UniqueName: \"kubernetes.io/projected/6da1e593-c678-4e37-9471-f261e82e004c-kube-api-access-kdn7l\") pod \"6da1e593-c678-4e37-9471-f261e82e004c\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.081371 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da1e593-c678-4e37-9471-f261e82e004c-kube-api-access-kdn7l" (OuterVolumeSpecName: "kube-api-access-kdn7l") pod "6da1e593-c678-4e37-9471-f261e82e004c" (UID: "6da1e593-c678-4e37-9471-f261e82e004c"). InnerVolumeSpecName "kube-api-access-kdn7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.176049 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-config-data\") pod \"6da1e593-c678-4e37-9471-f261e82e004c\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.176130 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-combined-ca-bundle\") pod \"6da1e593-c678-4e37-9471-f261e82e004c\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.176169 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-scripts\") pod \"6da1e593-c678-4e37-9471-f261e82e004c\" (UID: \"6da1e593-c678-4e37-9471-f261e82e004c\") " Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.176471 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdn7l\" (UniqueName: \"kubernetes.io/projected/6da1e593-c678-4e37-9471-f261e82e004c-kube-api-access-kdn7l\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.179289 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-scripts" (OuterVolumeSpecName: "scripts") pod "6da1e593-c678-4e37-9471-f261e82e004c" (UID: "6da1e593-c678-4e37-9471-f261e82e004c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.198930 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-config-data" (OuterVolumeSpecName: "config-data") pod "6da1e593-c678-4e37-9471-f261e82e004c" (UID: "6da1e593-c678-4e37-9471-f261e82e004c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.211963 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6da1e593-c678-4e37-9471-f261e82e004c" (UID: "6da1e593-c678-4e37-9471-f261e82e004c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.277537 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.277571 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.277580 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da1e593-c678-4e37-9471-f261e82e004c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.660765 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dzjkz" 
event={"ID":"6da1e593-c678-4e37-9471-f261e82e004c","Type":"ContainerDied","Data":"9a46126017130bcb88ed35c15b2b1eb4c16a399a072a641f98f81df2958d8bcc"} Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.660804 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dzjkz" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.660815 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a46126017130bcb88ed35c15b2b1eb4c16a399a072a641f98f81df2958d8bcc" Jan 26 18:05:51 crc kubenswrapper[4787]: E0126 18:05:51.730396 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice\": RecentStats: unable to find data in memory cache]" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.858677 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.859232 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83019da6-c787-45a2-a3fb-83268d56b241" containerName="nova-api-log" containerID="cri-o://1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759" gracePeriod=30 Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.859346 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83019da6-c787-45a2-a3fb-83268d56b241" containerName="nova-api-api" containerID="cri-o://ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a" gracePeriod=30 Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.867714 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="83019da6-c787-45a2-a3fb-83268d56b241" containerName="nova-api-api" probeResult="failure" output="Get 
\"https://10.217.0.198:8774/\": EOF" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.867717 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="83019da6-c787-45a2-a3fb-83268d56b241" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": EOF" Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.889363 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.889591 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e1b6ff16-f547-47ae-94ea-5fed2393e15d" containerName="nova-scheduler-scheduler" containerID="cri-o://ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3" gracePeriod=30 Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.904725 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.905243 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-metadata" containerID="cri-o://2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811" gracePeriod=30 Jan 26 18:05:51 crc kubenswrapper[4787]: I0126 18:05:51.905161 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-log" containerID="cri-o://634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01" gracePeriod=30 Jan 26 18:05:52 crc kubenswrapper[4787]: I0126 18:05:52.671327 4787 generic.go:334] "Generic (PLEG): container finished" podID="83019da6-c787-45a2-a3fb-83268d56b241" containerID="1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759" exitCode=143 Jan 26 18:05:52 crc kubenswrapper[4787]: 
I0126 18:05:52.671401 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83019da6-c787-45a2-a3fb-83268d56b241","Type":"ContainerDied","Data":"1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759"} Jan 26 18:05:52 crc kubenswrapper[4787]: I0126 18:05:52.673268 4787 generic.go:334] "Generic (PLEG): container finished" podID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerID="634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01" exitCode=143 Jan 26 18:05:52 crc kubenswrapper[4787]: I0126 18:05:52.673381 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d35ba76b-3385-47bb-bf77-8e0d523df927","Type":"ContainerDied","Data":"634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01"} Jan 26 18:05:52 crc kubenswrapper[4787]: E0126 18:05:52.737696 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 18:05:52 crc kubenswrapper[4787]: E0126 18:05:52.739697 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 18:05:52 crc kubenswrapper[4787]: E0126 18:05:52.740923 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 
18:05:52 crc kubenswrapper[4787]: E0126 18:05:52.741001 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e1b6ff16-f547-47ae-94ea-5fed2393e15d" containerName="nova-scheduler-scheduler" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.039789 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": read tcp 10.217.0.2:46466->10.217.0.189:8775: read: connection reset by peer" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.040153 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": read tcp 10.217.0.2:46480->10.217.0.189:8775: read: connection reset by peer" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.470903 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.580734 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-combined-ca-bundle\") pod \"d35ba76b-3385-47bb-bf77-8e0d523df927\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.580911 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-config-data\") pod \"d35ba76b-3385-47bb-bf77-8e0d523df927\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.581020 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-nova-metadata-tls-certs\") pod \"d35ba76b-3385-47bb-bf77-8e0d523df927\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.581050 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d35ba76b-3385-47bb-bf77-8e0d523df927-logs\") pod \"d35ba76b-3385-47bb-bf77-8e0d523df927\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.581078 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vns2d\" (UniqueName: \"kubernetes.io/projected/d35ba76b-3385-47bb-bf77-8e0d523df927-kube-api-access-vns2d\") pod \"d35ba76b-3385-47bb-bf77-8e0d523df927\" (UID: \"d35ba76b-3385-47bb-bf77-8e0d523df927\") " Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.582212 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d35ba76b-3385-47bb-bf77-8e0d523df927-logs" (OuterVolumeSpecName: "logs") pod "d35ba76b-3385-47bb-bf77-8e0d523df927" (UID: "d35ba76b-3385-47bb-bf77-8e0d523df927"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.587808 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35ba76b-3385-47bb-bf77-8e0d523df927-kube-api-access-vns2d" (OuterVolumeSpecName: "kube-api-access-vns2d") pod "d35ba76b-3385-47bb-bf77-8e0d523df927" (UID: "d35ba76b-3385-47bb-bf77-8e0d523df927"). InnerVolumeSpecName "kube-api-access-vns2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.615810 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d35ba76b-3385-47bb-bf77-8e0d523df927" (UID: "d35ba76b-3385-47bb-bf77-8e0d523df927"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.639417 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-config-data" (OuterVolumeSpecName: "config-data") pod "d35ba76b-3385-47bb-bf77-8e0d523df927" (UID: "d35ba76b-3385-47bb-bf77-8e0d523df927"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.662494 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d35ba76b-3385-47bb-bf77-8e0d523df927" (UID: "d35ba76b-3385-47bb-bf77-8e0d523df927"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.684535 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vns2d\" (UniqueName: \"kubernetes.io/projected/d35ba76b-3385-47bb-bf77-8e0d523df927-kube-api-access-vns2d\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.684571 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.684580 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.684591 4787 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d35ba76b-3385-47bb-bf77-8e0d523df927-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.684600 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d35ba76b-3385-47bb-bf77-8e0d523df927-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.701109 4787 generic.go:334] "Generic (PLEG): container finished" podID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerID="2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811" exitCode=0 Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.701188 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d35ba76b-3385-47bb-bf77-8e0d523df927","Type":"ContainerDied","Data":"2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811"} 
Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.701220 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d35ba76b-3385-47bb-bf77-8e0d523df927","Type":"ContainerDied","Data":"0c042d5a05a6c48a6c98f4665b10bd784d14f605f6e4c16ae7804becd439699b"} Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.701236 4787 scope.go:117] "RemoveContainer" containerID="2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.701353 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.741165 4787 scope.go:117] "RemoveContainer" containerID="634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.747750 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.763975 4787 scope.go:117] "RemoveContainer" containerID="2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811" Jan 26 18:05:55 crc kubenswrapper[4787]: E0126 18:05:55.764969 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811\": container with ID starting with 2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811 not found: ID does not exist" containerID="2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.765032 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811"} err="failed to get container status \"2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811\": rpc error: code = NotFound desc = 
could not find container \"2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811\": container with ID starting with 2c16cc4d4e1f9099531fc723589d9290e4ec026cdc00dd7804392ce38f82f811 not found: ID does not exist" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.765069 4787 scope.go:117] "RemoveContainer" containerID="634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.765094 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:55 crc kubenswrapper[4787]: E0126 18:05:55.766470 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01\": container with ID starting with 634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01 not found: ID does not exist" containerID="634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.766506 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01"} err="failed to get container status \"634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01\": rpc error: code = NotFound desc = could not find container \"634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01\": container with ID starting with 634be5675309cdeedf866568f64cd463320602ab9229d2e0710a8cb8674adb01 not found: ID does not exist" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.777169 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:55 crc kubenswrapper[4787]: E0126 18:05:55.777847 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9c8669-3692-4400-9699-a7892393fb7c" containerName="dnsmasq-dns" Jan 26 18:05:55 crc kubenswrapper[4787]: 
I0126 18:05:55.777919 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9c8669-3692-4400-9699-a7892393fb7c" containerName="dnsmasq-dns" Jan 26 18:05:55 crc kubenswrapper[4787]: E0126 18:05:55.778042 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da1e593-c678-4e37-9471-f261e82e004c" containerName="nova-manage" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.778102 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da1e593-c678-4e37-9471-f261e82e004c" containerName="nova-manage" Jan 26 18:05:55 crc kubenswrapper[4787]: E0126 18:05:55.778192 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-log" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.778255 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-log" Jan 26 18:05:55 crc kubenswrapper[4787]: E0126 18:05:55.778322 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9c8669-3692-4400-9699-a7892393fb7c" containerName="init" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.778377 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9c8669-3692-4400-9699-a7892393fb7c" containerName="init" Jan 26 18:05:55 crc kubenswrapper[4787]: E0126 18:05:55.778439 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-metadata" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.778510 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-metadata" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.778763 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-metadata" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.778839 
4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da1e593-c678-4e37-9471-f261e82e004c" containerName="nova-manage" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.778904 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9c8669-3692-4400-9699-a7892393fb7c" containerName="dnsmasq-dns" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.778996 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" containerName="nova-metadata-log" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.780179 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.787487 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.817861 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.817938 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.920802 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd59l\" (UniqueName: \"kubernetes.io/projected/426dd23d-ce8f-4f72-aece-79585de1cef1-kube-api-access-cd59l\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.920910 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " 
pod="openstack/nova-metadata-0" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.920968 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.921009 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-config-data\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:55 crc kubenswrapper[4787]: I0126 18:05:55.921042 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426dd23d-ce8f-4f72-aece-79585de1cef1-logs\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.023765 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426dd23d-ce8f-4f72-aece-79585de1cef1-logs\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.023827 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd59l\" (UniqueName: \"kubernetes.io/projected/426dd23d-ce8f-4f72-aece-79585de1cef1-kube-api-access-cd59l\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.023925 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.023995 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.024047 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-config-data\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.025162 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426dd23d-ce8f-4f72-aece-79585de1cef1-logs\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.027933 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-config-data\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.028485 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " 
pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.028550 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.041794 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd59l\" (UniqueName: \"kubernetes.io/projected/426dd23d-ce8f-4f72-aece-79585de1cef1-kube-api-access-cd59l\") pod \"nova-metadata-0\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.135420 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:05:56 crc kubenswrapper[4787]: W0126 18:05:56.568471 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod426dd23d_ce8f_4f72_aece_79585de1cef1.slice/crio-25662c378377299e3b878d86b551bd23a25f1a7e2ef8681fa62cb4c26203912b WatchSource:0}: Error finding container 25662c378377299e3b878d86b551bd23a25f1a7e2ef8681fa62cb4c26203912b: Status 404 returned error can't find the container with id 25662c378377299e3b878d86b551bd23a25f1a7e2ef8681fa62cb4c26203912b Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.573758 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:05:56 crc kubenswrapper[4787]: I0126 18:05:56.711139 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"426dd23d-ce8f-4f72-aece-79585de1cef1","Type":"ContainerStarted","Data":"25662c378377299e3b878d86b551bd23a25f1a7e2ef8681fa62cb4c26203912b"} Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.276493 4787 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.447117 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzwc8\" (UniqueName: \"kubernetes.io/projected/e1b6ff16-f547-47ae-94ea-5fed2393e15d-kube-api-access-wzwc8\") pod \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.447370 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-config-data\") pod \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.447408 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-combined-ca-bundle\") pod \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\" (UID: \"e1b6ff16-f547-47ae-94ea-5fed2393e15d\") " Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.456439 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b6ff16-f547-47ae-94ea-5fed2393e15d-kube-api-access-wzwc8" (OuterVolumeSpecName: "kube-api-access-wzwc8") pod "e1b6ff16-f547-47ae-94ea-5fed2393e15d" (UID: "e1b6ff16-f547-47ae-94ea-5fed2393e15d"). InnerVolumeSpecName "kube-api-access-wzwc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.477180 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1b6ff16-f547-47ae-94ea-5fed2393e15d" (UID: "e1b6ff16-f547-47ae-94ea-5fed2393e15d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.508396 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-config-data" (OuterVolumeSpecName: "config-data") pod "e1b6ff16-f547-47ae-94ea-5fed2393e15d" (UID: "e1b6ff16-f547-47ae-94ea-5fed2393e15d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.550019 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.550240 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b6ff16-f547-47ae-94ea-5fed2393e15d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.550324 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzwc8\" (UniqueName: \"kubernetes.io/projected/e1b6ff16-f547-47ae-94ea-5fed2393e15d-kube-api-access-wzwc8\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.601371 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d35ba76b-3385-47bb-bf77-8e0d523df927" path="/var/lib/kubelet/pods/d35ba76b-3385-47bb-bf77-8e0d523df927/volumes" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.719115 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.749275 4787 generic.go:334] "Generic (PLEG): container finished" podID="e1b6ff16-f547-47ae-94ea-5fed2393e15d" containerID="ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3" exitCode=0 Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.749371 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e1b6ff16-f547-47ae-94ea-5fed2393e15d","Type":"ContainerDied","Data":"ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3"} Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.749400 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e1b6ff16-f547-47ae-94ea-5fed2393e15d","Type":"ContainerDied","Data":"2a5e054175d5ba3f94a669d698fdcba4d9057b99354a77c4fa7324f61126ed01"} Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.749415 4787 scope.go:117] "RemoveContainer" containerID="ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.749503 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.754701 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t54ks\" (UniqueName: \"kubernetes.io/projected/83019da6-c787-45a2-a3fb-83268d56b241-kube-api-access-t54ks\") pod \"83019da6-c787-45a2-a3fb-83268d56b241\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.754839 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-combined-ca-bundle\") pod \"83019da6-c787-45a2-a3fb-83268d56b241\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.754939 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-config-data\") pod \"83019da6-c787-45a2-a3fb-83268d56b241\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.754997 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-public-tls-certs\") pod \"83019da6-c787-45a2-a3fb-83268d56b241\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.755075 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-internal-tls-certs\") pod \"83019da6-c787-45a2-a3fb-83268d56b241\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.755101 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/83019da6-c787-45a2-a3fb-83268d56b241-logs\") pod \"83019da6-c787-45a2-a3fb-83268d56b241\" (UID: \"83019da6-c787-45a2-a3fb-83268d56b241\") " Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.755935 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83019da6-c787-45a2-a3fb-83268d56b241-logs" (OuterVolumeSpecName: "logs") pod "83019da6-c787-45a2-a3fb-83268d56b241" (UID: "83019da6-c787-45a2-a3fb-83268d56b241"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.769580 4787 generic.go:334] "Generic (PLEG): container finished" podID="83019da6-c787-45a2-a3fb-83268d56b241" containerID="ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a" exitCode=0 Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.769672 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83019da6-c787-45a2-a3fb-83268d56b241","Type":"ContainerDied","Data":"ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a"} Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.769701 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83019da6-c787-45a2-a3fb-83268d56b241","Type":"ContainerDied","Data":"acba7078c0ca83b6963199f910bdcb7fc40b34ac8bc2bdaecfd41786b73a12fc"} Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.769776 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.781158 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83019da6-c787-45a2-a3fb-83268d56b241-kube-api-access-t54ks" (OuterVolumeSpecName: "kube-api-access-t54ks") pod "83019da6-c787-45a2-a3fb-83268d56b241" (UID: "83019da6-c787-45a2-a3fb-83268d56b241"). 
InnerVolumeSpecName "kube-api-access-t54ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.798166 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83019da6-c787-45a2-a3fb-83268d56b241" (UID: "83019da6-c787-45a2-a3fb-83268d56b241"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.798349 4787 scope.go:117] "RemoveContainer" containerID="ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3" Jan 26 18:05:57 crc kubenswrapper[4787]: E0126 18:05:57.803355 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3\": container with ID starting with ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3 not found: ID does not exist" containerID="ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.803406 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3"} err="failed to get container status \"ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3\": rpc error: code = NotFound desc = could not find container \"ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3\": container with ID starting with ef4fa3945fce03e6ca1fc07bb4f5d13d6382b5dca2f4104d5e30fc26657a57d3 not found: ID does not exist" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.803433 4787 scope.go:117] "RemoveContainer" containerID="ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a" Jan 26 18:05:57 crc 
kubenswrapper[4787]: I0126 18:05:57.804581 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"426dd23d-ce8f-4f72-aece-79585de1cef1","Type":"ContainerStarted","Data":"ebda6eab466c6a4894926fb8d2902783e462f0d182221a273f361e590df56e44"} Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.804613 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"426dd23d-ce8f-4f72-aece-79585de1cef1","Type":"ContainerStarted","Data":"cae0ec0c014dae5fd5a54d328b9898e6d68c67c322796b11b3a774d86a45fd24"} Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.834038 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.834102 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.857059 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t54ks\" (UniqueName: \"kubernetes.io/projected/83019da6-c787-45a2-a3fb-83268d56b241-kube-api-access-t54ks\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.857089 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.857101 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83019da6-c787-45a2-a3fb-83268d56b241-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.867029 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-config-data" (OuterVolumeSpecName: "config-data") pod "83019da6-c787-45a2-a3fb-83268d56b241" (UID: 
"83019da6-c787-45a2-a3fb-83268d56b241"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.868195 4787 scope.go:117] "RemoveContainer" containerID="1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.886880 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "83019da6-c787-45a2-a3fb-83268d56b241" (UID: "83019da6-c787-45a2-a3fb-83268d56b241"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.887369 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "83019da6-c787-45a2-a3fb-83268d56b241" (UID: "83019da6-c787-45a2-a3fb-83268d56b241"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.897694 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:05:57 crc kubenswrapper[4787]: E0126 18:05:57.898458 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83019da6-c787-45a2-a3fb-83268d56b241" containerName="nova-api-log" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.898552 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="83019da6-c787-45a2-a3fb-83268d56b241" containerName="nova-api-log" Jan 26 18:05:57 crc kubenswrapper[4787]: E0126 18:05:57.898665 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b6ff16-f547-47ae-94ea-5fed2393e15d" containerName="nova-scheduler-scheduler" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.898745 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b6ff16-f547-47ae-94ea-5fed2393e15d" containerName="nova-scheduler-scheduler" Jan 26 18:05:57 crc kubenswrapper[4787]: E0126 18:05:57.898837 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83019da6-c787-45a2-a3fb-83268d56b241" containerName="nova-api-api" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.898926 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="83019da6-c787-45a2-a3fb-83268d56b241" containerName="nova-api-api" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.899329 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b6ff16-f547-47ae-94ea-5fed2393e15d" containerName="nova-scheduler-scheduler" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.899437 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="83019da6-c787-45a2-a3fb-83268d56b241" containerName="nova-api-api" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.899519 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="83019da6-c787-45a2-a3fb-83268d56b241" 
containerName="nova-api-log" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.900364 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.902761 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.902739059 podStartE2EDuration="2.902739059s" podCreationTimestamp="2026-01-26 18:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:05:57.846525077 +0000 UTC m=+1326.553661210" watchObservedRunningTime="2026-01-26 18:05:57.902739059 +0000 UTC m=+1326.609875192" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.904168 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.907456 4787 scope.go:117] "RemoveContainer" containerID="ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a" Jan 26 18:05:57 crc kubenswrapper[4787]: E0126 18:05:57.908098 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a\": container with ID starting with ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a not found: ID does not exist" containerID="ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.908174 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a"} err="failed to get container status \"ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a\": rpc error: code = NotFound desc = could not find container 
\"ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a\": container with ID starting with ea2b1d5ff32a6c8909a1daeb010c821a6ddf5196ca54d24b24212b5d03c9e34a not found: ID does not exist" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.908206 4787 scope.go:117] "RemoveContainer" containerID="1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759" Jan 26 18:05:57 crc kubenswrapper[4787]: E0126 18:05:57.908585 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759\": container with ID starting with 1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759 not found: ID does not exist" containerID="1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.908631 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759"} err="failed to get container status \"1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759\": rpc error: code = NotFound desc = could not find container \"1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759\": container with ID starting with 1ad400ca2c1e8c5fceaf94c94b6659eb4b2387bd387d4af825e00bb4955c8759 not found: ID does not exist" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.925455 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.958597 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.958634 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:57 crc kubenswrapper[4787]: I0126 18:05:57.958643 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83019da6-c787-45a2-a3fb-83268d56b241-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.060311 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.060465 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2dk\" (UniqueName: \"kubernetes.io/projected/f43c375f-1176-442e-98fd-5d9acba6e199-kube-api-access-hx2dk\") pod \"nova-scheduler-0\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.060528 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-config-data\") pod \"nova-scheduler-0\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.110858 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.119707 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.128868 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 
26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.130334 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.132259 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.132494 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.133148 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.147485 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.161741 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx2dk\" (UniqueName: \"kubernetes.io/projected/f43c375f-1176-442e-98fd-5d9acba6e199-kube-api-access-hx2dk\") pod \"nova-scheduler-0\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.161837 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-config-data\") pod \"nova-scheduler-0\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.161926 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.168844 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.169501 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-config-data\") pod \"nova-scheduler-0\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.188171 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx2dk\" (UniqueName: \"kubernetes.io/projected/f43c375f-1176-442e-98fd-5d9acba6e199-kube-api-access-hx2dk\") pod \"nova-scheduler-0\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " pod="openstack/nova-scheduler-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.219695 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.264161 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-public-tls-certs\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.264618 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-logs\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.264668 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.264716 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.265064 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-config-data\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.265215 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shmsw\" (UniqueName: \"kubernetes.io/projected/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-kube-api-access-shmsw\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.366928 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-public-tls-certs\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.367020 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-logs\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.367045 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.367074 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.367120 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-config-data\") pod \"nova-api-0\" (UID: 
\"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.367149 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shmsw\" (UniqueName: \"kubernetes.io/projected/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-kube-api-access-shmsw\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.370439 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-logs\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.371805 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-public-tls-certs\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.372966 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.382686 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.382731 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-config-data\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.382828 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shmsw\" (UniqueName: \"kubernetes.io/projected/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-kube-api-access-shmsw\") pod \"nova-api-0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.620749 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.687735 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:05:58 crc kubenswrapper[4787]: W0126 18:05:58.700855 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf43c375f_1176_442e_98fd_5d9acba6e199.slice/crio-00f01706a8da327e7bf646ef73decd923cfad633a525671c58d7586f9030a44e WatchSource:0}: Error finding container 00f01706a8da327e7bf646ef73decd923cfad633a525671c58d7586f9030a44e: Status 404 returned error can't find the container with id 00f01706a8da327e7bf646ef73decd923cfad633a525671c58d7586f9030a44e Jan 26 18:05:58 crc kubenswrapper[4787]: I0126 18:05:58.822024 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f43c375f-1176-442e-98fd-5d9acba6e199","Type":"ContainerStarted","Data":"00f01706a8da327e7bf646ef73decd923cfad633a525671c58d7586f9030a44e"} Jan 26 18:05:59 crc kubenswrapper[4787]: I0126 18:05:59.140409 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:05:59 crc kubenswrapper[4787]: I0126 18:05:59.599960 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="83019da6-c787-45a2-a3fb-83268d56b241" path="/var/lib/kubelet/pods/83019da6-c787-45a2-a3fb-83268d56b241/volumes" Jan 26 18:05:59 crc kubenswrapper[4787]: I0126 18:05:59.601033 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b6ff16-f547-47ae-94ea-5fed2393e15d" path="/var/lib/kubelet/pods/e1b6ff16-f547-47ae-94ea-5fed2393e15d/volumes" Jan 26 18:05:59 crc kubenswrapper[4787]: I0126 18:05:59.836140 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12e77a01-1165-4f0a-ad35-fe127b5ae6c0","Type":"ContainerStarted","Data":"822ecb7ff1646f9b506cca634869b822b27bb00a188755e74a6e4b48e2ab3e95"} Jan 26 18:05:59 crc kubenswrapper[4787]: I0126 18:05:59.836820 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12e77a01-1165-4f0a-ad35-fe127b5ae6c0","Type":"ContainerStarted","Data":"e68164068e1c1a9792fd558fe1417586d54a36d32fce7de08c55887a3e3bda6f"} Jan 26 18:05:59 crc kubenswrapper[4787]: I0126 18:05:59.836878 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12e77a01-1165-4f0a-ad35-fe127b5ae6c0","Type":"ContainerStarted","Data":"59d0423d93e070ac4ae6d1e6e027bea79213084a80f5a00eba9dbcf8ea8e9d91"} Jan 26 18:05:59 crc kubenswrapper[4787]: I0126 18:05:59.837851 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f43c375f-1176-442e-98fd-5d9acba6e199","Type":"ContainerStarted","Data":"bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e"} Jan 26 18:05:59 crc kubenswrapper[4787]: I0126 18:05:59.862256 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.862215313 podStartE2EDuration="1.862215313s" podCreationTimestamp="2026-01-26 18:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
18:05:59.852653236 +0000 UTC m=+1328.559789389" watchObservedRunningTime="2026-01-26 18:05:59.862215313 +0000 UTC m=+1328.569351446" Jan 26 18:05:59 crc kubenswrapper[4787]: I0126 18:05:59.875507 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8754857769999997 podStartE2EDuration="2.875485777s" podCreationTimestamp="2026-01-26 18:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:05:59.872279201 +0000 UTC m=+1328.579415354" watchObservedRunningTime="2026-01-26 18:05:59.875485777 +0000 UTC m=+1328.582621910" Jan 26 18:06:01 crc kubenswrapper[4787]: I0126 18:06:01.136365 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 18:06:01 crc kubenswrapper[4787]: I0126 18:06:01.136422 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 18:06:01 crc kubenswrapper[4787]: E0126 18:06:01.954719 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice/crio-9a46126017130bcb88ed35c15b2b1eb4c16a399a072a641f98f81df2958d8bcc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice\": RecentStats: unable to find data in memory cache]" Jan 26 18:06:03 crc kubenswrapper[4787]: I0126 18:06:03.220598 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 18:06:06 crc kubenswrapper[4787]: I0126 18:06:06.135771 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 18:06:06 crc kubenswrapper[4787]: I0126 18:06:06.136111 
4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 18:06:07 crc kubenswrapper[4787]: I0126 18:06:07.154240 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:06:07 crc kubenswrapper[4787]: I0126 18:06:07.154246 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:06:08 crc kubenswrapper[4787]: I0126 18:06:08.220778 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 18:06:08 crc kubenswrapper[4787]: I0126 18:06:08.248085 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 18:06:08 crc kubenswrapper[4787]: I0126 18:06:08.621999 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:06:08 crc kubenswrapper[4787]: I0126 18:06:08.622056 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:06:08 crc kubenswrapper[4787]: I0126 18:06:08.966078 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 18:06:08 crc kubenswrapper[4787]: I0126 18:06:08.968941 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 18:06:09 crc kubenswrapper[4787]: I0126 18:06:09.637278 4787 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:06:09 crc kubenswrapper[4787]: I0126 18:06:09.637269 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:06:12 crc kubenswrapper[4787]: E0126 18:06:12.184148 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice/crio-9a46126017130bcb88ed35c15b2b1eb4c16a399a072a641f98f81df2958d8bcc\": RecentStats: unable to find data in memory cache]" Jan 26 18:06:16 crc kubenswrapper[4787]: I0126 18:06:16.141789 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 18:06:16 crc kubenswrapper[4787]: I0126 18:06:16.142314 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 18:06:16 crc kubenswrapper[4787]: I0126 18:06:16.151087 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 18:06:16 crc kubenswrapper[4787]: I0126 18:06:16.151796 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 18:06:18 crc kubenswrapper[4787]: I0126 18:06:18.631692 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Jan 26 18:06:18 crc kubenswrapper[4787]: I0126 18:06:18.632335 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 18:06:18 crc kubenswrapper[4787]: I0126 18:06:18.633481 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 18:06:18 crc kubenswrapper[4787]: I0126 18:06:18.641386 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 18:06:19 crc kubenswrapper[4787]: I0126 18:06:19.040109 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 18:06:19 crc kubenswrapper[4787]: I0126 18:06:19.046110 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 18:06:22 crc kubenswrapper[4787]: E0126 18:06:22.406102 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice/crio-9a46126017130bcb88ed35c15b2b1eb4c16a399a072a641f98f81df2958d8bcc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice\": RecentStats: unable to find data in memory cache]" Jan 26 18:06:32 crc kubenswrapper[4787]: E0126 18:06:32.624321 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice/crio-9a46126017130bcb88ed35c15b2b1eb4c16a399a072a641f98f81df2958d8bcc\": RecentStats: unable to find data in memory cache]" Jan 26 
18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.537657 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.539307 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="cf362037-b6e5-4dce-8da1-698fd75ff850" containerName="openstackclient" containerID="cri-o://92ee99ec8f96ad8d93512ba959ebd6a2d327963c172a5fa70335f6edb958771d" gracePeriod=2 Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.636453 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.636785 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-558e-account-create-update-b4hcn"] Jan 26 18:06:37 crc kubenswrapper[4787]: E0126 18:06:37.645361 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf362037-b6e5-4dce-8da1-698fd75ff850" containerName="openstackclient" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.645633 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf362037-b6e5-4dce-8da1-698fd75ff850" containerName="openstackclient" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.646012 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf362037-b6e5-4dce-8da1-698fd75ff850" containerName="openstackclient" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.647265 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-558e-account-create-update-b4hcn" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.675962 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.688844 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-558e-account-create-update-b4hcn"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.731553 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cc2d8"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.736360 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cc2d8" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.749968 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.754871 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cc2d8"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.780011 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.787512 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="7ef9888f-835c-40ea-9d3b-c7084cbffd65" containerName="galera" probeResult="failure" output="command timed out" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.789254 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="7ef9888f-835c-40ea-9d3b-c7084cbffd65" containerName="galera" probeResult="failure" output="command timed out" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.807019 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-558e-account-create-update-brzxq"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.817145 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts\") pod \"root-account-create-update-cc2d8\" (UID: \"e52ed138-0046-4bf6-b8f0-7bd5fb016f16\") " pod="openstack/root-account-create-update-cc2d8" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.817384 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f7f8\" (UniqueName: \"kubernetes.io/projected/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-kube-api-access-5f7f8\") pod \"root-account-create-update-cc2d8\" (UID: \"e52ed138-0046-4bf6-b8f0-7bd5fb016f16\") " pod="openstack/root-account-create-update-cc2d8" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.817447 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq4f9\" (UniqueName: \"kubernetes.io/projected/df4a77a8-4971-432f-81db-ac6be78f24a0-kube-api-access-qq4f9\") pod \"nova-api-558e-account-create-update-b4hcn\" (UID: \"df4a77a8-4971-432f-81db-ac6be78f24a0\") " pod="openstack/nova-api-558e-account-create-update-b4hcn" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.817482 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df4a77a8-4971-432f-81db-ac6be78f24a0-operator-scripts\") pod \"nova-api-558e-account-create-update-b4hcn\" (UID: \"df4a77a8-4971-432f-81db-ac6be78f24a0\") " pod="openstack/nova-api-558e-account-create-update-b4hcn" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.826098 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-558e-account-create-update-brzxq"] Jan 26 18:06:37 crc 
kubenswrapper[4787]: I0126 18:06:37.854774 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5fc4-account-create-update-zr6c9"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.856745 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.866543 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b570-account-create-update-8grdt"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.868217 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b570-account-create-update-8grdt" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.872513 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.880660 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.885443 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5fc4-account-create-update-zr6c9"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.918660 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f7f8\" (UniqueName: \"kubernetes.io/projected/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-kube-api-access-5f7f8\") pod \"root-account-create-update-cc2d8\" (UID: \"e52ed138-0046-4bf6-b8f0-7bd5fb016f16\") " pod="openstack/root-account-create-update-cc2d8" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.919299 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq4f9\" (UniqueName: \"kubernetes.io/projected/df4a77a8-4971-432f-81db-ac6be78f24a0-kube-api-access-qq4f9\") pod \"nova-api-558e-account-create-update-b4hcn\" (UID: 
\"df4a77a8-4971-432f-81db-ac6be78f24a0\") " pod="openstack/nova-api-558e-account-create-update-b4hcn" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.919441 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df4a77a8-4971-432f-81db-ac6be78f24a0-operator-scripts\") pod \"nova-api-558e-account-create-update-b4hcn\" (UID: \"df4a77a8-4971-432f-81db-ac6be78f24a0\") " pod="openstack/nova-api-558e-account-create-update-b4hcn" Jan 26 18:06:37 crc kubenswrapper[4787]: E0126 18:06:37.918991 4787 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.920690 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts\") pod \"root-account-create-update-cc2d8\" (UID: \"e52ed138-0046-4bf6-b8f0-7bd5fb016f16\") " pod="openstack/root-account-create-update-cc2d8" Jan 26 18:06:37 crc kubenswrapper[4787]: E0126 18:06:37.921447 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data podName:57e65f25-43dd-4baf-b2fa-7256dcbd452d nodeName:}" failed. No retries permitted until 2026-01-26 18:06:38.421425404 +0000 UTC m=+1367.128561537 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data") pod "rabbitmq-server-0" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d") : configmap "rabbitmq-config-data" not found Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.922187 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df4a77a8-4971-432f-81db-ac6be78f24a0-operator-scripts\") pod \"nova-api-558e-account-create-update-b4hcn\" (UID: \"df4a77a8-4971-432f-81db-ac6be78f24a0\") " pod="openstack/nova-api-558e-account-create-update-b4hcn" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.927767 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts\") pod \"root-account-create-update-cc2d8\" (UID: \"e52ed138-0046-4bf6-b8f0-7bd5fb016f16\") " pod="openstack/root-account-create-update-cc2d8" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.928170 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b36455-e624-4719-b164-29f4afacfb4d-operator-scripts\") pod \"nova-cell0-5fc4-account-create-update-zr6c9\" (UID: \"c9b36455-e624-4719-b164-29f4afacfb4d\") " pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.928860 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8v5k\" (UniqueName: \"kubernetes.io/projected/c9b36455-e624-4719-b164-29f4afacfb4d-kube-api-access-l8v5k\") pod \"nova-cell0-5fc4-account-create-update-zr6c9\" (UID: \"c9b36455-e624-4719-b164-29f4afacfb4d\") " pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" Jan 26 18:06:37 crc kubenswrapper[4787]: 
I0126 18:06:37.949729 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b570-account-create-update-8grdt"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.967917 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f7f8\" (UniqueName: \"kubernetes.io/projected/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-kube-api-access-5f7f8\") pod \"root-account-create-update-cc2d8\" (UID: \"e52ed138-0046-4bf6-b8f0-7bd5fb016f16\") " pod="openstack/root-account-create-update-cc2d8" Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.983464 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b570-account-create-update-62lx7"] Jan 26 18:06:37 crc kubenswrapper[4787]: I0126 18:06:37.987493 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq4f9\" (UniqueName: \"kubernetes.io/projected/df4a77a8-4971-432f-81db-ac6be78f24a0-kube-api-access-qq4f9\") pod \"nova-api-558e-account-create-update-b4hcn\" (UID: \"df4a77a8-4971-432f-81db-ac6be78f24a0\") " pod="openstack/nova-api-558e-account-create-update-b4hcn" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.045854 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-698h4"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.063484 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-558e-account-create-update-b4hcn" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.071131 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8v5k\" (UniqueName: \"kubernetes.io/projected/c9b36455-e624-4719-b164-29f4afacfb4d-kube-api-access-l8v5k\") pod \"nova-cell0-5fc4-account-create-update-zr6c9\" (UID: \"c9b36455-e624-4719-b164-29f4afacfb4d\") " pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.071563 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b36455-e624-4719-b164-29f4afacfb4d-operator-scripts\") pod \"nova-cell0-5fc4-account-create-update-zr6c9\" (UID: \"c9b36455-e624-4719-b164-29f4afacfb4d\") " pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.071616 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ce1f24f-01df-4958-8d8d-29b46e248f3a-operator-scripts\") pod \"cinder-b570-account-create-update-8grdt\" (UID: \"5ce1f24f-01df-4958-8d8d-29b46e248f3a\") " pod="openstack/cinder-b570-account-create-update-8grdt" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.071695 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sjcd\" (UniqueName: \"kubernetes.io/projected/5ce1f24f-01df-4958-8d8d-29b46e248f3a-kube-api-access-9sjcd\") pod \"cinder-b570-account-create-update-8grdt\" (UID: \"5ce1f24f-01df-4958-8d8d-29b46e248f3a\") " pod="openstack/cinder-b570-account-create-update-8grdt" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.072714 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c9b36455-e624-4719-b164-29f4afacfb4d-operator-scripts\") pod \"nova-cell0-5fc4-account-create-update-zr6c9\" (UID: \"c9b36455-e624-4719-b164-29f4afacfb4d\") " pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.107218 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b570-account-create-update-62lx7"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.123445 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cc2d8" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.130690 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-698h4"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.170276 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.171087 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="2ac3cbab-86ff-4544-bf13-b1039585edbe" containerName="openstack-network-exporter" containerID="cri-o://0a8518325f6afe38dc22fc027b5b60a5dd918a78869776559a7fc2103c3d51df" gracePeriod=300 Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.174382 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ce1f24f-01df-4958-8d8d-29b46e248f3a-operator-scripts\") pod \"cinder-b570-account-create-update-8grdt\" (UID: \"5ce1f24f-01df-4958-8d8d-29b46e248f3a\") " pod="openstack/cinder-b570-account-create-update-8grdt" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.174465 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sjcd\" (UniqueName: \"kubernetes.io/projected/5ce1f24f-01df-4958-8d8d-29b46e248f3a-kube-api-access-9sjcd\") pod 
\"cinder-b570-account-create-update-8grdt\" (UID: \"5ce1f24f-01df-4958-8d8d-29b46e248f3a\") " pod="openstack/cinder-b570-account-create-update-8grdt" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.179993 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8v5k\" (UniqueName: \"kubernetes.io/projected/c9b36455-e624-4719-b164-29f4afacfb4d-kube-api-access-l8v5k\") pod \"nova-cell0-5fc4-account-create-update-zr6c9\" (UID: \"c9b36455-e624-4719-b164-29f4afacfb4d\") " pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.185382 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5fc4-account-create-update-qttc5"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.189825 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ce1f24f-01df-4958-8d8d-29b46e248f3a-operator-scripts\") pod \"cinder-b570-account-create-update-8grdt\" (UID: \"5ce1f24f-01df-4958-8d8d-29b46e248f3a\") " pod="openstack/cinder-b570-account-create-update-8grdt" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.208397 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.229912 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5fc4-account-create-update-qttc5"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.258372 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-571d-account-create-update-2kgf7"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.259764 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-571d-account-create-update-2kgf7" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.266385 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.304366 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sjcd\" (UniqueName: \"kubernetes.io/projected/5ce1f24f-01df-4958-8d8d-29b46e248f3a-kube-api-access-9sjcd\") pod \"cinder-b570-account-create-update-8grdt\" (UID: \"5ce1f24f-01df-4958-8d8d-29b46e248f3a\") " pod="openstack/cinder-b570-account-create-update-8grdt" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.365443 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-571d-account-create-update-2kgf7"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.378878 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5njg\" (UniqueName: \"kubernetes.io/projected/751af144-58aa-4770-b232-98816c3498d2-kube-api-access-d5njg\") pod \"nova-cell1-571d-account-create-update-2kgf7\" (UID: \"751af144-58aa-4770-b232-98816c3498d2\") " pod="openstack/nova-cell1-571d-account-create-update-2kgf7" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.379202 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/751af144-58aa-4770-b232-98816c3498d2-operator-scripts\") pod \"nova-cell1-571d-account-create-update-2kgf7\" (UID: \"751af144-58aa-4770-b232-98816c3498d2\") " pod="openstack/nova-cell1-571d-account-create-update-2kgf7" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.419010 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-99a8-account-create-update-82x75"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.420187 4787 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-99a8-account-create-update-82x75" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.428461 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.472442 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-99a8-account-create-update-82x75"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.487212 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10dd5fe1-04d0-4902-b12f-6fd0c592641f-operator-scripts\") pod \"glance-99a8-account-create-update-82x75\" (UID: \"10dd5fe1-04d0-4902-b12f-6fd0c592641f\") " pod="openstack/glance-99a8-account-create-update-82x75" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.487323 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqnbd\" (UniqueName: \"kubernetes.io/projected/10dd5fe1-04d0-4902-b12f-6fd0c592641f-kube-api-access-rqnbd\") pod \"glance-99a8-account-create-update-82x75\" (UID: \"10dd5fe1-04d0-4902-b12f-6fd0c592641f\") " pod="openstack/glance-99a8-account-create-update-82x75" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.487425 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5njg\" (UniqueName: \"kubernetes.io/projected/751af144-58aa-4770-b232-98816c3498d2-kube-api-access-d5njg\") pod \"nova-cell1-571d-account-create-update-2kgf7\" (UID: \"751af144-58aa-4770-b232-98816c3498d2\") " pod="openstack/nova-cell1-571d-account-create-update-2kgf7" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.487528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/751af144-58aa-4770-b232-98816c3498d2-operator-scripts\") pod \"nova-cell1-571d-account-create-update-2kgf7\" (UID: \"751af144-58aa-4770-b232-98816c3498d2\") " pod="openstack/nova-cell1-571d-account-create-update-2kgf7" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.488462 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/751af144-58aa-4770-b232-98816c3498d2-operator-scripts\") pod \"nova-cell1-571d-account-create-update-2kgf7\" (UID: \"751af144-58aa-4770-b232-98816c3498d2\") " pod="openstack/nova-cell1-571d-account-create-update-2kgf7" Jan 26 18:06:38 crc kubenswrapper[4787]: E0126 18:06:38.488540 4787 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 26 18:06:38 crc kubenswrapper[4787]: E0126 18:06:38.488589 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data podName:57e65f25-43dd-4baf-b2fa-7256dcbd452d nodeName:}" failed. No retries permitted until 2026-01-26 18:06:39.48857241 +0000 UTC m=+1368.195708543 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data") pod "rabbitmq-server-0" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d") : configmap "rabbitmq-config-data" not found Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.501338 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-mvxff"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.546621 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-545b-account-create-update-wbcdd"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.547164 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b570-account-create-update-8grdt" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.547928 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5njg\" (UniqueName: \"kubernetes.io/projected/751af144-58aa-4770-b232-98816c3498d2-kube-api-access-d5njg\") pod \"nova-cell1-571d-account-create-update-2kgf7\" (UID: \"751af144-58aa-4770-b232-98816c3498d2\") " pod="openstack/nova-cell1-571d-account-create-update-2kgf7" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.549332 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-545b-account-create-update-wbcdd" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.565349 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.565558 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="2ac3cbab-86ff-4544-bf13-b1039585edbe" containerName="ovsdbserver-sb" containerID="cri-o://0b733d1c4b7c91d6d0d94b6e6ebb1c3927dd7b9c58ec3f54aecf6573b8c5b28e" gracePeriod=300 Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.577299 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-mvxff"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.591910 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10dd5fe1-04d0-4902-b12f-6fd0c592641f-operator-scripts\") pod \"glance-99a8-account-create-update-82x75\" (UID: \"10dd5fe1-04d0-4902-b12f-6fd0c592641f\") " pod="openstack/glance-99a8-account-create-update-82x75" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.592301 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqnbd\" (UniqueName: 
\"kubernetes.io/projected/10dd5fe1-04d0-4902-b12f-6fd0c592641f-kube-api-access-rqnbd\") pod \"glance-99a8-account-create-update-82x75\" (UID: \"10dd5fe1-04d0-4902-b12f-6fd0c592641f\") " pod="openstack/glance-99a8-account-create-update-82x75" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.593624 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10dd5fe1-04d0-4902-b12f-6fd0c592641f-operator-scripts\") pod \"glance-99a8-account-create-update-82x75\" (UID: \"10dd5fe1-04d0-4902-b12f-6fd0c592641f\") " pod="openstack/glance-99a8-account-create-update-82x75" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.605053 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-545b-account-create-update-wbcdd"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.638185 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqnbd\" (UniqueName: \"kubernetes.io/projected/10dd5fe1-04d0-4902-b12f-6fd0c592641f-kube-api-access-rqnbd\") pod \"glance-99a8-account-create-update-82x75\" (UID: \"10dd5fe1-04d0-4902-b12f-6fd0c592641f\") " pod="openstack/glance-99a8-account-create-update-82x75" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.651994 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-571d-account-create-update-vbsms"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.690193 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-571d-account-create-update-2kgf7" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.694151 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add7dee2-b5b8-4a04-84fd-76f323e7e444-operator-scripts\") pod \"neutron-545b-account-create-update-wbcdd\" (UID: \"add7dee2-b5b8-4a04-84fd-76f323e7e444\") " pod="openstack/neutron-545b-account-create-update-wbcdd" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.694266 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8rs9\" (UniqueName: \"kubernetes.io/projected/add7dee2-b5b8-4a04-84fd-76f323e7e444-kube-api-access-c8rs9\") pod \"neutron-545b-account-create-update-wbcdd\" (UID: \"add7dee2-b5b8-4a04-84fd-76f323e7e444\") " pod="openstack/neutron-545b-account-create-update-wbcdd" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.713715 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-gwmwp"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.744852 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-571d-account-create-update-vbsms"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.752526 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-99a8-account-create-update-82x75" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.774110 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.774675 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="208dca76-0c20-4fd9-a685-76144777c48c" containerName="openstack-network-exporter" containerID="cri-o://a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd" gracePeriod=30 Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.774635 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="208dca76-0c20-4fd9-a685-76144777c48c" containerName="ovn-northd" containerID="cri-o://17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1" gracePeriod=30 Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.800373 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8rs9\" (UniqueName: \"kubernetes.io/projected/add7dee2-b5b8-4a04-84fd-76f323e7e444-kube-api-access-c8rs9\") pod \"neutron-545b-account-create-update-wbcdd\" (UID: \"add7dee2-b5b8-4a04-84fd-76f323e7e444\") " pod="openstack/neutron-545b-account-create-update-wbcdd" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.800823 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add7dee2-b5b8-4a04-84fd-76f323e7e444-operator-scripts\") pod \"neutron-545b-account-create-update-wbcdd\" (UID: \"add7dee2-b5b8-4a04-84fd-76f323e7e444\") " pod="openstack/neutron-545b-account-create-update-wbcdd" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.802726 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-gwmwp"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.831145 4787 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-99a8-account-create-update-mb52q"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.803842 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add7dee2-b5b8-4a04-84fd-76f323e7e444-operator-scripts\") pod \"neutron-545b-account-create-update-wbcdd\" (UID: \"add7dee2-b5b8-4a04-84fd-76f323e7e444\") " pod="openstack/neutron-545b-account-create-update-wbcdd" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.864666 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8rs9\" (UniqueName: \"kubernetes.io/projected/add7dee2-b5b8-4a04-84fd-76f323e7e444-kube-api-access-c8rs9\") pod \"neutron-545b-account-create-update-wbcdd\" (UID: \"add7dee2-b5b8-4a04-84fd-76f323e7e444\") " pod="openstack/neutron-545b-account-create-update-wbcdd" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.881025 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.881749 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="8125efc3-988e-4689-acea-515119c3764f" containerName="openstack-network-exporter" containerID="cri-o://c3b83431cb1aae7c46fbdf0e58b9cfa69783cc9a87a4bf1097cc0a7b9aad22e8" gracePeriod=300 Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.893217 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-99a8-account-create-update-mb52q"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.930664 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-545b-account-create-update-ql5ph"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.934613 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-545b-account-create-update-wbcdd" Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.983862 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-545b-account-create-update-ql5ph"] Jan 26 18:06:38 crc kubenswrapper[4787]: I0126 18:06:38.998403 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-pjrz5"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.012549 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="8125efc3-988e-4689-acea-515119c3764f" containerName="ovsdbserver-nb" containerID="cri-o://dce4e5453293ee450ee0ab6aaf6225e5f156c6dba0c7e6bdcccbe6c15fd75397" gracePeriod=300 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.035350 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-pjrz5"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.113774 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-hpbh5"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.125525 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-2j4vw"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.125777 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-2j4vw" podUID="0f8be55d-39c3-4ede-aff3-62890aa7c0e5" containerName="openstack-network-exporter" containerID="cri-o://9304d3acaa85f83d020a39079016b3b206affa7cd56bed13c15d8a89719d6e20" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.139664 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-dzjkz"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.170823 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pb7c9"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 
18:06:39.299678 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dzjkz"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.364398 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5rlw8"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.410614 4787 generic.go:334] "Generic (PLEG): container finished" podID="8125efc3-988e-4689-acea-515119c3764f" containerID="c3b83431cb1aae7c46fbdf0e58b9cfa69783cc9a87a4bf1097cc0a7b9aad22e8" exitCode=2 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.410722 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8125efc3-988e-4689-acea-515119c3764f","Type":"ContainerDied","Data":"c3b83431cb1aae7c46fbdf0e58b9cfa69783cc9a87a4bf1097cc0a7b9aad22e8"} Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.440463 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pb7c9"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.458557 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-5782v"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.469130 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5782v"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.506418 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2ac3cbab-86ff-4544-bf13-b1039585edbe/ovsdbserver-sb/0.log" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.506486 4787 generic.go:334] "Generic (PLEG): container finished" podID="2ac3cbab-86ff-4544-bf13-b1039585edbe" containerID="0a8518325f6afe38dc22fc027b5b60a5dd918a78869776559a7fc2103c3d51df" exitCode=2 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.506504 4787 generic.go:334] "Generic (PLEG): container finished" podID="2ac3cbab-86ff-4544-bf13-b1039585edbe" 
containerID="0b733d1c4b7c91d6d0d94b6e6ebb1c3927dd7b9c58ec3f54aecf6573b8c5b28e" exitCode=143 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.506781 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2ac3cbab-86ff-4544-bf13-b1039585edbe","Type":"ContainerDied","Data":"0a8518325f6afe38dc22fc027b5b60a5dd918a78869776559a7fc2103c3d51df"} Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.506811 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2ac3cbab-86ff-4544-bf13-b1039585edbe","Type":"ContainerDied","Data":"0b733d1c4b7c91d6d0d94b6e6ebb1c3927dd7b9c58ec3f54aecf6573b8c5b28e"} Jan 26 18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.507886 4787 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 26 18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.507931 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data podName:57e65f25-43dd-4baf-b2fa-7256dcbd452d nodeName:}" failed. No retries permitted until 2026-01-26 18:06:41.507916545 +0000 UTC m=+1370.215052678 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data") pod "rabbitmq-server-0" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d") : configmap "rabbitmq-config-data" not found Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.515369 4787 generic.go:334] "Generic (PLEG): container finished" podID="208dca76-0c20-4fd9-a685-76144777c48c" containerID="a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd" exitCode=2 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.515411 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"208dca76-0c20-4fd9-a685-76144777c48c","Type":"ContainerDied","Data":"a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd"} Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.532897 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.588058 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69b7546858-x7nbv"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.588504 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69b7546858-x7nbv" podUID="3f8a7212-44ab-42f1-86be-8b79726fe4f8" containerName="placement-log" containerID="cri-o://d672f1c0ba676ceab5a813f55187b4369018d13a2c0eb61a432e5bca929c29df" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.588867 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69b7546858-x7nbv" podUID="3f8a7212-44ab-42f1-86be-8b79726fe4f8" containerName="placement-api" containerID="cri-o://c41ea7e4541f8199f1a664d5d047bd04b1cc721396791e7f0e6eb3e40078f9bf" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.607081 4787 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not 
succeed." pod="openstack/cinder-scheduler-0" secret="" err="secret \"cinder-cinder-dockercfg-qtc2j\" not found" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.615770 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64bf3812-5e05-4d90-89d9-456487b0ce0f" path="/var/lib/kubelet/pods/64bf3812-5e05-4d90-89d9-456487b0ce0f/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.616558 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c4904e-d492-4cb7-ba9f-afa2bd393aca" path="/var/lib/kubelet/pods/67c4904e-d492-4cb7-ba9f-afa2bd393aca/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.617067 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da1e593-c678-4e37-9471-f261e82e004c" path="/var/lib/kubelet/pods/6da1e593-c678-4e37-9471-f261e82e004c/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.620347 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="764c11b4-f0d4-46e4-9742-570e42729ab8" path="/var/lib/kubelet/pods/764c11b4-f0d4-46e4-9742-570e42729ab8/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.625147 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb8c355-e367-439f-8584-7cf7a80fcc79" path="/var/lib/kubelet/pods/7cb8c355-e367-439f-8584-7cf7a80fcc79/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.626161 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a38241c2-8c24-4bc3-91bb-83ee519fe085" path="/var/lib/kubelet/pods/a38241c2-8c24-4bc3-91bb-83ee519fe085/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.627803 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c19d2c-b2f5-4c8e-964b-39af5b632525" path="/var/lib/kubelet/pods/b9c19d2c-b2f5-4c8e-964b-39af5b632525/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.629343 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c875d43e-8019-4eb4-8c37-d189a8eb0a01" path="/var/lib/kubelet/pods/c875d43e-8019-4eb4-8c37-d189a8eb0a01/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.630058 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb6adff5-a812-45ee-a03d-89db150f295f" path="/var/lib/kubelet/pods/cb6adff5-a812-45ee-a03d-89db150f295f/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.630738 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d251ab02-33c7-41d6-806e-3a80f332c86f" path="/var/lib/kubelet/pods/d251ab02-33c7-41d6-806e-3a80f332c86f/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.633228 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db02f653-5216-4701-9ab6-3cf3e9352d87" path="/var/lib/kubelet/pods/db02f653-5216-4701-9ab6-3cf3e9352d87/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.634312 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1eed840-68dc-40c2-b2d7-3d3b350b9a12" path="/var/lib/kubelet/pods/f1eed840-68dc-40c2-b2d7-3d3b350b9a12/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.635086 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88a529e-5e32-4b4d-b2a8-18a1e4824c88" path="/var/lib/kubelet/pods/f88a529e-5e32-4b4d-b2a8-18a1e4824c88/volumes" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.688339 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-zjhv2"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.707062 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-zjhv2"] Jan 26 18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.713844 4787 secret.go:188] Couldn't get secret openstack/cinder-scheduler-config-data: secret "cinder-scheduler-config-data" not found Jan 26 18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.713965 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:40.2139235 +0000 UTC m=+1368.921059713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-scheduler-config-data" not found Jan 26 18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.720079 4787 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Jan 26 18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.720145 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:40.220127136 +0000 UTC m=+1368.927263269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-scripts" not found Jan 26 18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.720181 4787 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Jan 26 18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.720206 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:40.220197778 +0000 UTC m=+1368.927334021 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-config-data" not found Jan 26 18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.721714 4787 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 26 18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.723492 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data podName:bb1abb80-0591-49c7-b549-969066392a5a nodeName:}" failed. No retries permitted until 2026-01-26 18:06:40.223439404 +0000 UTC m=+1368.930575607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data") pod "rabbitmq-cell1-server-0" (UID: "bb1abb80-0591-49c7-b549-969066392a5a") : configmap "rabbitmq-cell1-config-data" not found Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.728929 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-zw55k"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.729211 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" podUID="890207cf-4cf3-4962-b5a1-19c076fbdeaa" containerName="dnsmasq-dns" containerID="cri-o://1dc0bd2a4ecc7a68b77285c82ea23aa2dabc9c62ebde2984da45e41acbf3b11c" gracePeriod=10 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.765790 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-44bz2"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.783906 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-44bz2"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 
18:06:39.806933 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.807230 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="44392b57-bc6b-4a8b-8ff3-346fab2422af" containerName="cinder-api-log" containerID="cri-o://a522c77b7f2e449b1626089cb405e0a35925fb71dd7b4046bb28319da9af07ef" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.807736 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="44392b57-bc6b-4a8b-8ff3-346fab2422af" containerName="cinder-api" containerID="cri-o://fb5b68e300042cdeab819b0943f78da3bc13f53fb414daf36d4826013aa7717e" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.820134 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zz9w2"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.846128 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-zz9w2"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.908993 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2c05-account-create-update-bmkzn"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.936750 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovs-vswitchd" containerID="cri-o://1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.937123 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2c05-account-create-update-bmkzn"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.953699 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xlql4"] Jan 26 
18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.957684 4787 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 26 18:06:39 crc kubenswrapper[4787]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 26 18:06:39 crc kubenswrapper[4787]: + source /usr/local/bin/container-scripts/functions Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNBridge=br-int Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNRemote=tcp:localhost:6642 Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNEncapType=geneve Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNAvailabilityZones= Jan 26 18:06:39 crc kubenswrapper[4787]: ++ EnableChassisAsGateway=true Jan 26 18:06:39 crc kubenswrapper[4787]: ++ PhysicalNetworks= Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNHostName= Jan 26 18:06:39 crc kubenswrapper[4787]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 26 18:06:39 crc kubenswrapper[4787]: ++ ovs_dir=/var/lib/openvswitch Jan 26 18:06:39 crc kubenswrapper[4787]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 26 18:06:39 crc kubenswrapper[4787]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 26 18:06:39 crc kubenswrapper[4787]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 26 18:06:39 crc kubenswrapper[4787]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 26 18:06:39 crc kubenswrapper[4787]: + sleep 0.5 Jan 26 18:06:39 crc kubenswrapper[4787]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 26 18:06:39 crc kubenswrapper[4787]: + cleanup_ovsdb_server_semaphore Jan 26 18:06:39 crc kubenswrapper[4787]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 26 18:06:39 crc kubenswrapper[4787]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 26 18:06:39 crc kubenswrapper[4787]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-hpbh5" message=< Jan 26 18:06:39 crc kubenswrapper[4787]: Exiting ovsdb-server (5) [ OK ] Jan 26 18:06:39 crc kubenswrapper[4787]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 26 18:06:39 crc kubenswrapper[4787]: + source /usr/local/bin/container-scripts/functions Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNBridge=br-int Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNRemote=tcp:localhost:6642 Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNEncapType=geneve Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNAvailabilityZones= Jan 26 18:06:39 crc kubenswrapper[4787]: ++ EnableChassisAsGateway=true Jan 26 18:06:39 crc kubenswrapper[4787]: ++ PhysicalNetworks= Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNHostName= Jan 26 18:06:39 crc kubenswrapper[4787]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 26 18:06:39 crc kubenswrapper[4787]: ++ ovs_dir=/var/lib/openvswitch Jan 26 18:06:39 crc kubenswrapper[4787]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 26 18:06:39 crc kubenswrapper[4787]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 26 18:06:39 crc kubenswrapper[4787]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 26 18:06:39 crc kubenswrapper[4787]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 26 18:06:39 crc kubenswrapper[4787]: + sleep 0.5 Jan 26 18:06:39 crc kubenswrapper[4787]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 26 18:06:39 crc kubenswrapper[4787]: + cleanup_ovsdb_server_semaphore Jan 26 18:06:39 crc kubenswrapper[4787]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 26 18:06:39 crc kubenswrapper[4787]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 26 18:06:39 crc kubenswrapper[4787]: > Jan 26 18:06:39 crc kubenswrapper[4787]: E0126 18:06:39.957729 4787 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 26 18:06:39 crc kubenswrapper[4787]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 26 18:06:39 crc kubenswrapper[4787]: + source /usr/local/bin/container-scripts/functions Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNBridge=br-int Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNRemote=tcp:localhost:6642 Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNEncapType=geneve Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNAvailabilityZones= Jan 26 18:06:39 crc kubenswrapper[4787]: ++ EnableChassisAsGateway=true Jan 26 18:06:39 crc kubenswrapper[4787]: ++ PhysicalNetworks= Jan 26 18:06:39 crc kubenswrapper[4787]: ++ OVNHostName= Jan 26 18:06:39 crc kubenswrapper[4787]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 26 18:06:39 crc kubenswrapper[4787]: ++ ovs_dir=/var/lib/openvswitch Jan 26 18:06:39 crc kubenswrapper[4787]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 26 18:06:39 crc kubenswrapper[4787]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 26 18:06:39 crc kubenswrapper[4787]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 26 18:06:39 crc kubenswrapper[4787]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 26 18:06:39 crc kubenswrapper[4787]: + sleep 0.5 Jan 26 18:06:39 crc kubenswrapper[4787]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 26 18:06:39 crc kubenswrapper[4787]: + cleanup_ovsdb_server_semaphore Jan 26 18:06:39 crc kubenswrapper[4787]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 26 18:06:39 crc kubenswrapper[4787]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 26 18:06:39 crc kubenswrapper[4787]: > pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server" containerID="cri-o://71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.957759 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server" containerID="cri-o://71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.977837 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xlql4"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.987051 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.987644 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-server" containerID="cri-o://ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988247 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="swift-recon-cron" containerID="cri-o://d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 
18:06:39.988336 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="rsync" containerID="cri-o://9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988425 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-expirer" containerID="cri-o://9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988482 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-updater" containerID="cri-o://bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988528 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-auditor" containerID="cri-o://a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988570 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-replicator" containerID="cri-o://3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988625 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-server" 
containerID="cri-o://7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988673 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-updater" containerID="cri-o://51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988715 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-auditor" containerID="cri-o://0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988761 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-replicator" containerID="cri-o://cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988819 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-server" containerID="cri-o://8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988863 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-reaper" containerID="cri-o://20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988905 4787 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-auditor" containerID="cri-o://c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.988962 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-replicator" containerID="cri-o://98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c" gracePeriod=30 Jan 26 18:06:39 crc kubenswrapper[4787]: I0126 18:06:39.997198 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-69a3-account-create-update-vbh4d"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.004235 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: W0126 18:06:40.021171 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf4a77a8_4971_432f_81db_ac6be78f24a0.slice/crio-6341c3d9da1068129c2e5e715f5cd800742640d1faf7c6b8133c93aea6557ad0 WatchSource:0}: Error finding container 6341c3d9da1068129c2e5e715f5cd800742640d1faf7c6b8133c93aea6557ad0: Status 404 returned error can't find the container with id 6341c3d9da1068129c2e5e715f5cd800742640d1faf7c6b8133c93aea6557ad0 Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.027666 4787 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 18:06:40 crc kubenswrapper[4787]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 26 18:06:40 crc 
kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: if [ -n "nova_api" ]; then Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="nova_api" Jan 26 18:06:40 crc kubenswrapper[4787]: else Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="*" Jan 26 18:06:40 crc kubenswrapper[4787]: fi Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: # going for maximum compatibility here: Jan 26 18:06:40 crc kubenswrapper[4787]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 26 18:06:40 crc kubenswrapper[4787]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 26 18:06:40 crc kubenswrapper[4787]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 26 18:06:40 crc kubenswrapper[4787]: # support updates Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: $MYSQL_CMD < logger="UnhandledError" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.027714 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-69a3-account-create-update-vbh4d"] Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.028807 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-558e-account-create-update-b4hcn" podUID="df4a77a8-4971-432f-81db-ac6be78f24a0" Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.036346 4787 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 18:06:40 crc kubenswrapper[4787]: container 
&Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: if [ -n "nova_cell0" ]; then Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="nova_cell0" Jan 26 18:06:40 crc kubenswrapper[4787]: else Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="*" Jan 26 18:06:40 crc kubenswrapper[4787]: fi Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: # going for maximum compatibility here: Jan 26 18:06:40 crc kubenswrapper[4787]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 26 18:06:40 crc kubenswrapper[4787]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 26 18:06:40 crc kubenswrapper[4787]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 26 18:06:40 crc kubenswrapper[4787]: # support updates Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: $MYSQL_CMD < logger="UnhandledError" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.040159 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.040460 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" containerName="glance-log" containerID="cri-o://0bdd2c4e02e5b71c8136f8efddfe53e1a758f24f948ca99efec537c186ed5b0c" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.040982 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" containerName="glance-httpd" containerID="cri-o://011bedc70e267a9340d9ea488ce83a6f2966f96cd9f36e47bd7028368ceb1135" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.042064 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" podUID="c9b36455-e624-4719-b164-29f4afacfb4d" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.056705 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.056981 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerName="nova-api-log" containerID="cri-o://e68164068e1c1a9792fd558fe1417586d54a36d32fce7de08c55887a3e3bda6f" 
gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.057465 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerName="nova-api-api" containerID="cri-o://822ecb7ff1646f9b506cca634869b822b27bb00a188755e74a6e4b48e2ab3e95" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.065369 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-j6kz4"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.083435 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-j6kz4"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.120750 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.150271 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.150489 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="87c0c3c8-9282-45e5-b376-9c335e24573a" containerName="glance-log" containerID="cri-o://ae0907e003fb71214ee6b4a55004aff424e981221e4c8a3a5bf129205ee9d3ee" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.150689 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="87c0c3c8-9282-45e5-b376-9c335e24573a" containerName="glance-httpd" containerID="cri-o://ef0bc42b7bd2dab6b5fbabd59a7d254961bab025f7081c2b30ff94990add57bc" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.167567 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cc2d8"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.205083 4787 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/neutron-86f4885877-fz869"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.205353 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86f4885877-fz869" podUID="08858ab3-fd32-43dd-8002-bb2b01216237" containerName="neutron-api" containerID="cri-o://3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.205827 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86f4885877-fz869" podUID="08858ab3-fd32-43dd-8002-bb2b01216237" containerName="neutron-httpd" containerID="cri-o://8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.223023 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b570-account-create-update-8grdt"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.232794 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.233028 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-log" containerID="cri-o://cae0ec0c014dae5fd5a54d328b9898e6d68c67c322796b11b3a774d86a45fd24" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.233244 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-metadata" containerID="cri-o://ebda6eab466c6a4894926fb8d2902783e462f0d182221a273f361e590df56e44" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.242457 4787 secret.go:188] Couldn't get secret openstack/cinder-scheduler-config-data: secret "cinder-scheduler-config-data" not found Jan 
26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.242550 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:41.242528071 +0000 UTC m=+1369.949664204 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-scheduler-config-data" not found Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.242551 4787 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.242588 4787 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.242599 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:41.242587792 +0000 UTC m=+1369.949723915 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-config-data" not found Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.242636 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data podName:bb1abb80-0591-49c7-b549-969066392a5a nodeName:}" failed. 
No retries permitted until 2026-01-26 18:06:41.242622373 +0000 UTC m=+1369.949758506 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data") pod "rabbitmq-cell1-server-0" (UID: "bb1abb80-0591-49c7-b549-969066392a5a") : configmap "rabbitmq-cell1-config-data" not found Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.242808 4787 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.243096 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:41.243061964 +0000 UTC m=+1369.950198107 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-scripts" not found Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.318573 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9rdc5"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.329413 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.329646 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" podUID="c27796ea-3db5-42ad-8b22-e4d774e28578" containerName="barbican-keystone-listener-log" containerID="cri-o://26320b4b7ea2c7895920e7f17135b8543b24a3e99a032d58fbe702acea73a2ec" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.329817 4787 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" podUID="c27796ea-3db5-42ad-8b22-e4d774e28578" containerName="barbican-keystone-listener" containerID="cri-o://e8e42b27e122fd736da88e5dda90848d4ff33a360274bad85a5080141095b729" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.341911 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9rdc5"] Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.347900 4787 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 18:06:40 crc kubenswrapper[4787]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: if [ -n "cinder" ]; then Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="cinder" Jan 26 18:06:40 crc kubenswrapper[4787]: else Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="*" Jan 26 18:06:40 crc kubenswrapper[4787]: fi Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: # going for maximum compatibility here: Jan 26 18:06:40 crc kubenswrapper[4787]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 26 18:06:40 crc kubenswrapper[4787]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 26 18:06:40 crc kubenswrapper[4787]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 26 18:06:40 crc kubenswrapper[4787]: # support updates Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: $MYSQL_CMD < logger="UnhandledError" Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.349081 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-b570-account-create-update-8grdt" podUID="5ce1f24f-01df-4958-8d8d-29b46e248f3a" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.350787 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6b64b47c9-scfhg"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.351074 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6b64b47c9-scfhg" podUID="1e181982-6b54-4a6f-a52c-eb025b767fb0" containerName="barbican-worker-log" containerID="cri-o://fe98b6784049128c409c7bb089303be8bb4a940ea14a70bee91761c87ae35d2b" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.351215 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6b64b47c9-scfhg" podUID="1e181982-6b54-4a6f-a52c-eb025b767fb0" containerName="barbican-worker" containerID="cri-o://96b6dc83a1412919436b7c09b8eb24e55f4c4ec160f6f0f178b289df71a6d06a" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.363035 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5dfdd88467-46gsc"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.363307 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5dfdd88467-46gsc" podUID="4b418d30-17a4-4cd9-a16d-c10b1f030492" containerName="proxy-httpd" 
containerID="cri-o://f119de1c64759eab74ed8d3728129ed6c4867af79ba4028686238074915d9c03" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.363676 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5dfdd88467-46gsc" podUID="4b418d30-17a4-4cd9-a16d-c10b1f030492" containerName="proxy-server" containerID="cri-o://9087d36e049ad60d022d43c8eeebecb1463be62c5936b387b3dae05e8f883648" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.370874 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-571d-account-create-update-2kgf7"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.375711 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-684775777b-sd52g"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.376148 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-684775777b-sd52g" podUID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerName="barbican-api-log" containerID="cri-o://346eebb01a3f72fc52b37308e2a116af63d29c1e13d92d2e955b17ea8a8bd265" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.376757 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-684775777b-sd52g" podUID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerName="barbican-api" containerID="cri-o://5b93e665a1925c67d4a5127df1172c0fcd71ee7863b68ee1361c8117a65ecb61" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.382841 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5fc4-account-create-update-zr6c9"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.389712 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-k4b5g"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.401785 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-db-create-k4b5g"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.411671 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.443024 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vftnt"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.456062 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vftnt"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.463012 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.463263 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6333914f-1303-43b8-ac9b-88c29e2bea64" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://769d9a6f91f28c2b54187f40ecdd34c5613915c3aa1b59893f5cdce81e1a0acf" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.480108 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-558e-account-create-update-b4hcn"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.480593 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="57e65f25-43dd-4baf-b2fa-7256dcbd452d" containerName="rabbitmq" containerID="cri-o://53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad" gracePeriod=604800 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.487190 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2ac3cbab-86ff-4544-bf13-b1039585edbe/ovsdbserver-sb/0.log" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.487273 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.488134 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dnqzw"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.496324 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-545b-account-create-update-wbcdd"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.509148 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dnqzw"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.521030 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-xnjln"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.530890 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-99a8-account-create-update-82x75"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.548932 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-xnjln"] Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.551886 4787 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 18:06:40 crc kubenswrapper[4787]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: if [ -n "glance" ]; then Jan 26 18:06:40 crc 
kubenswrapper[4787]: GRANT_DATABASE="glance" Jan 26 18:06:40 crc kubenswrapper[4787]: else Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="*" Jan 26 18:06:40 crc kubenswrapper[4787]: fi Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: # going for maximum compatibility here: Jan 26 18:06:40 crc kubenswrapper[4787]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 26 18:06:40 crc kubenswrapper[4787]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 26 18:06:40 crc kubenswrapper[4787]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 26 18:06:40 crc kubenswrapper[4787]: # support updates Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: $MYSQL_CMD < logger="UnhandledError" Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.553432 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-99a8-account-create-update-82x75" podUID="10dd5fe1-04d0-4902-b12f-6fd0c592641f" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.555500 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="7ef9888f-835c-40ea-9d3b-c7084cbffd65" containerName="galera" containerID="cri-o://7378f982f6cae9a52f12316a36e90af7a418c481c9a556fe677e423d6b63de51" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.560147 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.579998 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8fcbn"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.580668 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-558e-account-create-update-b4hcn" event={"ID":"df4a77a8-4971-432f-81db-ac6be78f24a0","Type":"ContainerStarted","Data":"6341c3d9da1068129c2e5e715f5cd800742640d1faf7c6b8133c93aea6557ad0"} Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.586979 4787 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 18:06:40 crc kubenswrapper[4787]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: if [ -n "nova_api" ]; then Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="nova_api" Jan 26 18:06:40 crc kubenswrapper[4787]: else Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="*" Jan 26 18:06:40 crc kubenswrapper[4787]: fi Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: # going for maximum compatibility here: Jan 26 18:06:40 crc kubenswrapper[4787]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 26 18:06:40 crc kubenswrapper[4787]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 26 18:06:40 crc kubenswrapper[4787]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 26 18:06:40 crc kubenswrapper[4787]: # support updates Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: $MYSQL_CMD < logger="UnhandledError" Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.587462 4787 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 18:06:40 crc kubenswrapper[4787]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: if [ -n "nova_cell1" ]; then Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="nova_cell1" Jan 26 18:06:40 crc kubenswrapper[4787]: else Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="*" Jan 26 18:06:40 crc kubenswrapper[4787]: fi Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: # going for maximum compatibility here: Jan 26 18:06:40 crc kubenswrapper[4787]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 26 18:06:40 crc kubenswrapper[4787]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 26 18:06:40 crc kubenswrapper[4787]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 26 18:06:40 crc kubenswrapper[4787]: # support updates Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: $MYSQL_CMD < logger="UnhandledError" Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.588048 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-558e-account-create-update-b4hcn" podUID="df4a77a8-4971-432f-81db-ac6be78f24a0" Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.589318 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-571d-account-create-update-2kgf7" podUID="751af144-58aa-4770-b232-98816c3498d2" Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.605485 4787 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 18:06:40 crc kubenswrapper[4787]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: if [ -n "neutron" ]; then Jan 26 18:06:40 crc kubenswrapper[4787]: 
GRANT_DATABASE="neutron" Jan 26 18:06:40 crc kubenswrapper[4787]: else Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="*" Jan 26 18:06:40 crc kubenswrapper[4787]: fi Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: # going for maximum compatibility here: Jan 26 18:06:40 crc kubenswrapper[4787]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 26 18:06:40 crc kubenswrapper[4787]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 26 18:06:40 crc kubenswrapper[4787]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 26 18:06:40 crc kubenswrapper[4787]: # support updates Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: $MYSQL_CMD < logger="UnhandledError" Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.606711 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-545b-account-create-update-wbcdd" podUID="add7dee2-b5b8-4a04-84fd-76f323e7e444" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.608382 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.608661 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="afd68dd7-739f-4cd0-b3eb-c786b79c4b40" containerName="nova-cell1-conductor-conductor" containerID="cri-o://65139bd59ec594a52aca13930dabadeabe11be030f710dd6f822faa0baa01ffa" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.620987 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8fcbn"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.650934 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdbserver-sb-tls-certs\") pod \"2ac3cbab-86ff-4544-bf13-b1039585edbe\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.651019 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"2ac3cbab-86ff-4544-bf13-b1039585edbe\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.651153 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-metrics-certs-tls-certs\") pod \"2ac3cbab-86ff-4544-bf13-b1039585edbe\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.651816 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddmlh\" (UniqueName: \"kubernetes.io/projected/2ac3cbab-86ff-4544-bf13-b1039585edbe-kube-api-access-ddmlh\") pod \"2ac3cbab-86ff-4544-bf13-b1039585edbe\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.651881 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-scripts\") pod \"2ac3cbab-86ff-4544-bf13-b1039585edbe\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.651924 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-config\") pod \"2ac3cbab-86ff-4544-bf13-b1039585edbe\" (UID: 
\"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.652014 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdb-rundir\") pod \"2ac3cbab-86ff-4544-bf13-b1039585edbe\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.652096 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-combined-ca-bundle\") pod \"2ac3cbab-86ff-4544-bf13-b1039585edbe\" (UID: \"2ac3cbab-86ff-4544-bf13-b1039585edbe\") " Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.653054 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-config" (OuterVolumeSpecName: "config") pod "2ac3cbab-86ff-4544-bf13-b1039585edbe" (UID: "2ac3cbab-86ff-4544-bf13-b1039585edbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.653468 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-scripts" (OuterVolumeSpecName: "scripts") pod "2ac3cbab-86ff-4544-bf13-b1039585edbe" (UID: "2ac3cbab-86ff-4544-bf13-b1039585edbe"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.657553 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.657583 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac3cbab-86ff-4544-bf13-b1039585edbe-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.659488 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "2ac3cbab-86ff-4544-bf13-b1039585edbe" (UID: "2ac3cbab-86ff-4544-bf13-b1039585edbe"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.672858 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "2ac3cbab-86ff-4544-bf13-b1039585edbe" (UID: "2ac3cbab-86ff-4544-bf13-b1039585edbe"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.672925 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac3cbab-86ff-4544-bf13-b1039585edbe-kube-api-access-ddmlh" (OuterVolumeSpecName: "kube-api-access-ddmlh") pod "2ac3cbab-86ff-4544-bf13-b1039585edbe" (UID: "2ac3cbab-86ff-4544-bf13-b1039585edbe"). InnerVolumeSpecName "kube-api-access-ddmlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.683102 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5fc4-account-create-update-zr6c9"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.691830 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.691863 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.691871 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.691879 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.691886 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.691894 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.691901 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" 
containerID="cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.691907 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.691914 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.691921 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.692069 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.692126 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.692138 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.692147 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.692156 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.692196 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.692205 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.692213 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.692222 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.692231 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 
18:06:40.692708 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.692870 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="41cca919-781a-48fb-99c1-ec7ebbb7c601" containerName="nova-cell0-conductor-conductor" containerID="cri-o://dabc44d45ae4780a822ff11965f740a88ac60a4cb63ea6c15426d7cfbc4851b5" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.701828 4787 generic.go:334] "Generic (PLEG): container finished" podID="254806cc-4007-4a34-9852-0716b123830f" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.701918 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hpbh5" event={"ID":"254806cc-4007-4a34-9852-0716b123830f","Type":"ContainerDied","Data":"71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.705451 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qpg78"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.711681 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ac3cbab-86ff-4544-bf13-b1039585edbe" (UID: "2ac3cbab-86ff-4544-bf13-b1039585edbe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.718762 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qpg78"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.724188 4787 generic.go:334] "Generic (PLEG): container finished" podID="3f8a7212-44ab-42f1-86be-8b79726fe4f8" containerID="d672f1c0ba676ceab5a813f55187b4369018d13a2c0eb61a432e5bca929c29df" exitCode=143 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.724289 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69b7546858-x7nbv" event={"ID":"3f8a7212-44ab-42f1-86be-8b79726fe4f8","Type":"ContainerDied","Data":"d672f1c0ba676ceab5a813f55187b4369018d13a2c0eb61a432e5bca929c29df"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.729458 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-558e-account-create-update-b4hcn"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.738553 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cc2d8" event={"ID":"e52ed138-0046-4bf6-b8f0-7bd5fb016f16","Type":"ContainerStarted","Data":"bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.740276 4787 scope.go:117] "RemoveContainer" containerID="bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.741913 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cc2d8" event={"ID":"e52ed138-0046-4bf6-b8f0-7bd5fb016f16","Type":"ContainerStarted","Data":"a5d5338d9c8213b816cbb1421fa1bd0746a67c5143c7fb9b9ef0237407ce911e"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.748152 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b570-account-create-update-8grdt"] Jan 26 18:06:40 crc 
kubenswrapper[4787]: I0126 18:06:40.751103 4787 generic.go:334] "Generic (PLEG): container finished" podID="fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" containerID="0bdd2c4e02e5b71c8136f8efddfe53e1a758f24f948ca99efec537c186ed5b0c" exitCode=143 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.751276 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a","Type":"ContainerDied","Data":"0bdd2c4e02e5b71c8136f8efddfe53e1a758f24f948ca99efec537c186ed5b0c"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.763349 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.763383 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.763421 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.763432 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddmlh\" (UniqueName: \"kubernetes.io/projected/2ac3cbab-86ff-4544-bf13-b1039585edbe-kube-api-access-ddmlh\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.765740 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8125efc3-988e-4689-acea-515119c3764f/ovsdbserver-nb/0.log" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.765783 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="8125efc3-988e-4689-acea-515119c3764f" containerID="dce4e5453293ee450ee0ab6aaf6225e5f156c6dba0c7e6bdcccbe6c15fd75397" exitCode=143 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.765849 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8125efc3-988e-4689-acea-515119c3764f","Type":"ContainerDied","Data":"dce4e5453293ee450ee0ab6aaf6225e5f156c6dba0c7e6bdcccbe6c15fd75397"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.767589 4787 generic.go:334] "Generic (PLEG): container finished" podID="890207cf-4cf3-4962-b5a1-19c076fbdeaa" containerID="1dc0bd2a4ecc7a68b77285c82ea23aa2dabc9c62ebde2984da45e41acbf3b11c" exitCode=0 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.767645 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" event={"ID":"890207cf-4cf3-4962-b5a1-19c076fbdeaa","Type":"ContainerDied","Data":"1dc0bd2a4ecc7a68b77285c82ea23aa2dabc9c62ebde2984da45e41acbf3b11c"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.771495 4787 generic.go:334] "Generic (PLEG): container finished" podID="cf362037-b6e5-4dce-8da1-698fd75ff850" containerID="92ee99ec8f96ad8d93512ba959ebd6a2d327963c172a5fa70335f6edb958771d" exitCode=137 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.773419 4787 generic.go:334] "Generic (PLEG): container finished" podID="44392b57-bc6b-4a8b-8ff3-346fab2422af" containerID="a522c77b7f2e449b1626089cb405e0a35925fb71dd7b4046bb28319da9af07ef" exitCode=143 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.773474 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44392b57-bc6b-4a8b-8ff3-346fab2422af","Type":"ContainerDied","Data":"a522c77b7f2e449b1626089cb405e0a35925fb71dd7b4046bb28319da9af07ef"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.797333 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-99a8-account-create-update-82x75"] Jan 26 
18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.800266 4787 generic.go:334] "Generic (PLEG): container finished" podID="c27796ea-3db5-42ad-8b22-e4d774e28578" containerID="26320b4b7ea2c7895920e7f17135b8543b24a3e99a032d58fbe702acea73a2ec" exitCode=143 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.800335 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" event={"ID":"c27796ea-3db5-42ad-8b22-e4d774e28578","Type":"ContainerDied","Data":"26320b4b7ea2c7895920e7f17135b8543b24a3e99a032d58fbe702acea73a2ec"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.806077 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.806703 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2j4vw_0f8be55d-39c3-4ede-aff3-62890aa7c0e5/openstack-network-exporter/0.log" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.806748 4787 generic.go:334] "Generic (PLEG): container finished" podID="0f8be55d-39c3-4ede-aff3-62890aa7c0e5" containerID="9304d3acaa85f83d020a39079016b3b206affa7cd56bed13c15d8a89719d6e20" exitCode=2 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.806809 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2j4vw" event={"ID":"0f8be55d-39c3-4ede-aff3-62890aa7c0e5","Type":"ContainerDied","Data":"9304d3acaa85f83d020a39079016b3b206affa7cd56bed13c15d8a89719d6e20"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.808874 4787 generic.go:334] "Generic (PLEG): container finished" podID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerID="cae0ec0c014dae5fd5a54d328b9898e6d68c67c322796b11b3a774d86a45fd24" exitCode=143 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.808930 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"426dd23d-ce8f-4f72-aece-79585de1cef1","Type":"ContainerDied","Data":"cae0ec0c014dae5fd5a54d328b9898e6d68c67c322796b11b3a774d86a45fd24"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.809253 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "2ac3cbab-86ff-4544-bf13-b1039585edbe" (UID: "2ac3cbab-86ff-4544-bf13-b1039585edbe"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.819499 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" event={"ID":"c9b36455-e624-4719-b164-29f4afacfb4d","Type":"ContainerStarted","Data":"8145188e3fa3588b15b88579918745d5400b11e460ccc129029ee3a31fc31aeb"} Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.822639 4787 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 18:06:40 crc kubenswrapper[4787]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: if [ -n "nova_cell0" ]; then Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="nova_cell0" Jan 26 18:06:40 
crc kubenswrapper[4787]: else Jan 26 18:06:40 crc kubenswrapper[4787]: GRANT_DATABASE="*" Jan 26 18:06:40 crc kubenswrapper[4787]: fi Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: # going for maximum compatibility here: Jan 26 18:06:40 crc kubenswrapper[4787]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 26 18:06:40 crc kubenswrapper[4787]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 26 18:06:40 crc kubenswrapper[4787]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 26 18:06:40 crc kubenswrapper[4787]: # support updates Jan 26 18:06:40 crc kubenswrapper[4787]: Jan 26 18:06:40 crc kubenswrapper[4787]: $MYSQL_CMD < logger="UnhandledError" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.822679 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b570-account-create-update-8grdt" event={"ID":"5ce1f24f-01df-4958-8d8d-29b46e248f3a","Type":"ContainerStarted","Data":"afd1c412a5d1e02ce224dd3a619c5d9371024d8e743c6aebd0f8d0bcfda748b4"} Jan 26 18:06:40 crc kubenswrapper[4787]: E0126 18:06:40.824247 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" podUID="c9b36455-e624-4719-b164-29f4afacfb4d" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.831071 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-99a8-account-create-update-82x75" event={"ID":"10dd5fe1-04d0-4902-b12f-6fd0c592641f","Type":"ContainerStarted","Data":"cd26f656ccd4bfb3f85a26d7c8140940c273e7279f4eb42eb2a95de27f0b6ee8"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.835525 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 
18:06:40.835693 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f43c375f-1176-442e-98fd-5d9acba6e199" containerName="nova-scheduler-scheduler" containerID="cri-o://bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.850015 4787 generic.go:334] "Generic (PLEG): container finished" podID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerID="e68164068e1c1a9792fd558fe1417586d54a36d32fce7de08c55887a3e3bda6f" exitCode=143 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.850087 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12e77a01-1165-4f0a-ad35-fe127b5ae6c0","Type":"ContainerDied","Data":"e68164068e1c1a9792fd558fe1417586d54a36d32fce7de08c55887a3e3bda6f"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.851212 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-545b-account-create-update-wbcdd"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.856881 4787 generic.go:334] "Generic (PLEG): container finished" podID="87c0c3c8-9282-45e5-b376-9c335e24573a" containerID="ae0907e003fb71214ee6b4a55004aff424e981221e4c8a3a5bf129205ee9d3ee" exitCode=143 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.856977 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"87c0c3c8-9282-45e5-b376-9c335e24573a","Type":"ContainerDied","Data":"ae0907e003fb71214ee6b4a55004aff424e981221e4c8a3a5bf129205ee9d3ee"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.862830 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2ac3cbab-86ff-4544-bf13-b1039585edbe/ovsdbserver-sb/0.log" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.863268 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.863691 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2ac3cbab-86ff-4544-bf13-b1039585edbe","Type":"ContainerDied","Data":"7acef36f7fa4c435d02da4ded69dacd227ecd1d664592ba05f6e29cafd408791"} Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.863734 4787 scope.go:117] "RemoveContainer" containerID="0a8518325f6afe38dc22fc027b5b60a5dd918a78869776559a7fc2103c3d51df" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.864058 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="38c9405f-74d3-4282-90ac-0a9909a68b43" containerName="cinder-scheduler" containerID="cri-o://7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.864360 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="38c9405f-74d3-4282-90ac-0a9909a68b43" containerName="probe" containerID="cri-o://a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d" gracePeriod=30 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.865563 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.865590 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.941765 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-571d-account-create-update-2kgf7"] Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 
18:06:40.947672 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="bb1abb80-0591-49c7-b549-969066392a5a" containerName="rabbitmq" containerID="cri-o://6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd" gracePeriod=604800 Jan 26 18:06:40 crc kubenswrapper[4787]: I0126 18:06:40.981541 4787 scope.go:117] "RemoveContainer" containerID="0b733d1c4b7c91d6d0d94b6e6ebb1c3927dd7b9c58ec3f54aecf6573b8c5b28e" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.108246 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "2ac3cbab-86ff-4544-bf13-b1039585edbe" (UID: "2ac3cbab-86ff-4544-bf13-b1039585edbe"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.187410 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3cbab-86ff-4544-bf13-b1039585edbe-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.249480 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 18:06:41 crc kubenswrapper[4787]: E0126 18:06:41.293318 4787 secret.go:188] Couldn't get secret openstack/cinder-scheduler-config-data: secret "cinder-scheduler-config-data" not found Jan 26 18:06:41 crc kubenswrapper[4787]: E0126 18:06:41.293408 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:43.293393324 +0000 UTC m=+1372.000529457 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-scheduler-config-data" not found Jan 26 18:06:41 crc kubenswrapper[4787]: E0126 18:06:41.293925 4787 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Jan 26 18:06:41 crc kubenswrapper[4787]: E0126 18:06:41.293989 4787 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Jan 26 18:06:41 crc kubenswrapper[4787]: E0126 18:06:41.294020 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:43.294008198 +0000 UTC m=+1372.001144331 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-config-data" not found Jan 26 18:06:41 crc kubenswrapper[4787]: E0126 18:06:41.294068 4787 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 26 18:06:41 crc kubenswrapper[4787]: E0126 18:06:41.294073 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:43.294052369 +0000 UTC m=+1372.001188502 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-scripts" not found Jan 26 18:06:41 crc kubenswrapper[4787]: E0126 18:06:41.294160 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data podName:bb1abb80-0591-49c7-b549-969066392a5a nodeName:}" failed. No retries permitted until 2026-01-26 18:06:43.29408891 +0000 UTC m=+1372.001225143 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data") pod "rabbitmq-cell1-server-0" (UID: "bb1abb80-0591-49c7-b549-969066392a5a") : configmap "rabbitmq-cell1-config-data" not found Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.312561 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.355778 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.394859 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrj7p\" (UniqueName: \"kubernetes.io/projected/cf362037-b6e5-4dce-8da1-698fd75ff850-kube-api-access-wrj7p\") pod \"cf362037-b6e5-4dce-8da1-698fd75ff850\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.395048 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config\") pod \"cf362037-b6e5-4dce-8da1-698fd75ff850\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.395169 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config-secret\") pod \"cf362037-b6e5-4dce-8da1-698fd75ff850\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.395336 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-combined-ca-bundle\") pod \"cf362037-b6e5-4dce-8da1-698fd75ff850\" (UID: \"cf362037-b6e5-4dce-8da1-698fd75ff850\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.408894 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf362037-b6e5-4dce-8da1-698fd75ff850-kube-api-access-wrj7p" (OuterVolumeSpecName: "kube-api-access-wrj7p") pod "cf362037-b6e5-4dce-8da1-698fd75ff850" (UID: "cf362037-b6e5-4dce-8da1-698fd75ff850"). InnerVolumeSpecName "kube-api-access-wrj7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.432624 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf362037-b6e5-4dce-8da1-698fd75ff850" (UID: "cf362037-b6e5-4dce-8da1-698fd75ff850"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.453269 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "cf362037-b6e5-4dce-8da1-698fd75ff850" (UID: "cf362037-b6e5-4dce-8da1-698fd75ff850"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.469544 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2j4vw_0f8be55d-39c3-4ede-aff3-62890aa7c0e5/openstack-network-exporter/0.log" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.469615 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.497736 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.498356 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrj7p\" (UniqueName: \"kubernetes.io/projected/cf362037-b6e5-4dce-8da1-698fd75ff850-kube-api-access-wrj7p\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.498480 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.518238 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8125efc3-988e-4689-acea-515119c3764f/ovsdbserver-nb/0.log" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.518316 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.544543 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b570-account-create-update-8grdt" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.597911 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "cf362037-b6e5-4dce-8da1-698fd75ff850" (UID: "cf362037-b6e5-4dce-8da1-698fd75ff850"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.602659 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8125efc3-988e-4689-acea-515119c3764f-ovsdb-rundir\") pod \"8125efc3-988e-4689-acea-515119c3764f\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.604078 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-scripts\") pod \"8125efc3-988e-4689-acea-515119c3764f\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.604194 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-combined-ca-bundle\") pod \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.604413 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-config\") pod \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.604502 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h5z4\" (UniqueName: \"kubernetes.io/projected/8125efc3-988e-4689-acea-515119c3764f-kube-api-access-7h5z4\") pod \"8125efc3-988e-4689-acea-515119c3764f\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.604600 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-combined-ca-bundle\") pod \"8125efc3-988e-4689-acea-515119c3764f\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.604676 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovs-rundir\") pod \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.604757 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"8125efc3-988e-4689-acea-515119c3764f\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.604834 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-config\") pod \"8125efc3-988e-4689-acea-515119c3764f\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.605034 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovn-rundir\") pod \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\" (UID: 
\"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.605110 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtggw\" (UniqueName: \"kubernetes.io/projected/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-kube-api-access-rtggw\") pod \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.605178 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-ovsdbserver-nb-tls-certs\") pod \"8125efc3-988e-4689-acea-515119c3764f\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.605265 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-metrics-certs-tls-certs\") pod \"8125efc3-988e-4689-acea-515119c3764f\" (UID: \"8125efc3-988e-4689-acea-515119c3764f\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.605361 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-metrics-certs-tls-certs\") pod \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\" (UID: \"0f8be55d-39c3-4ede-aff3-62890aa7c0e5\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.608252 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf362037-b6e5-4dce-8da1-698fd75ff850-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.603662 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8125efc3-988e-4689-acea-515119c3764f-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "8125efc3-988e-4689-acea-515119c3764f" (UID: "8125efc3-988e-4689-acea-515119c3764f"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.604877 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-scripts" (OuterVolumeSpecName: "scripts") pod "8125efc3-988e-4689-acea-515119c3764f" (UID: "8125efc3-988e-4689-acea-515119c3764f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: E0126 18:06:41.608452 4787 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 26 18:06:41 crc kubenswrapper[4787]: E0126 18:06:41.632316 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data podName:57e65f25-43dd-4baf-b2fa-7256dcbd452d nodeName:}" failed. No retries permitted until 2026-01-26 18:06:45.632286758 +0000 UTC m=+1374.339422891 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data") pod "rabbitmq-server-0" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d") : configmap "rabbitmq-config-data" not found Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.610324 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8125efc3-988e-4689-acea-515119c3764f-kube-api-access-7h5z4" (OuterVolumeSpecName: "kube-api-access-7h5z4") pod "8125efc3-988e-4689-acea-515119c3764f" (UID: "8125efc3-988e-4689-acea-515119c3764f"). InnerVolumeSpecName "kube-api-access-7h5z4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.614562 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-config" (OuterVolumeSpecName: "config") pod "0f8be55d-39c3-4ede-aff3-62890aa7c0e5" (UID: "0f8be55d-39c3-4ede-aff3-62890aa7c0e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.614675 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "0f8be55d-39c3-4ede-aff3-62890aa7c0e5" (UID: "0f8be55d-39c3-4ede-aff3-62890aa7c0e5"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.614698 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "0f8be55d-39c3-4ede-aff3-62890aa7c0e5" (UID: "0f8be55d-39c3-4ede-aff3-62890aa7c0e5"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.616322 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-config" (OuterVolumeSpecName: "config") pod "8125efc3-988e-4689-acea-515119c3764f" (UID: "8125efc3-988e-4689-acea-515119c3764f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.635665 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.655060 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c068e38-f516-49f7-853a-a69b8f7d822d" path="/var/lib/kubelet/pods/0c068e38-f516-49f7-853a-a69b8f7d822d/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.655693 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1769639e-ba6d-4725-9012-91e6965c4cc0" path="/var/lib/kubelet/pods/1769639e-ba6d-4725-9012-91e6965c4cc0/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.656479 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "8125efc3-988e-4689-acea-515119c3764f" (UID: "8125efc3-988e-4689-acea-515119c3764f"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.669558 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1973e32a-6474-4322-b96b-7ac80f7017cc" path="/var/lib/kubelet/pods/1973e32a-6474-4322-b96b-7ac80f7017cc/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.670687 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac3cbab-86ff-4544-bf13-b1039585edbe" path="/var/lib/kubelet/pods/2ac3cbab-86ff-4544-bf13-b1039585edbe/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.671864 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7" path="/var/lib/kubelet/pods/2b30f11e-a687-4fd3-a0d8-d8fd8b1c81d7/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.674363 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d82e638-a1aa-4e59-ad3f-34d6f1db8516" path="/var/lib/kubelet/pods/2d82e638-a1aa-4e59-ad3f-34d6f1db8516/volumes" 
Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.675030 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dec399e-5b0d-4970-8e95-d17ec238f3a1" path="/var/lib/kubelet/pods/2dec399e-5b0d-4970-8e95-d17ec238f3a1/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.675640 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4d662c-4ce4-4a74-abb0-e751d736d531" path="/var/lib/kubelet/pods/5c4d662c-4ce4-4a74-abb0-e751d736d531/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.676980 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dfd1be0-7c5e-427e-8847-28e938c19844" path="/var/lib/kubelet/pods/6dfd1be0-7c5e-427e-8847-28e938c19844/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.677584 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86edc55e-eba7-4210-998f-92d7d7d5c18c" path="/var/lib/kubelet/pods/86edc55e-eba7-4210-998f-92d7d7d5c18c/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.678139 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57b7b82-aa02-4c9e-a8c4-8e21d44f627e" path="/var/lib/kubelet/pods/b57b7b82-aa02-4c9e-a8c4-8e21d44f627e/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.678750 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c96eb47d-13fc-4a65-9fe6-292dda4b1fec" path="/var/lib/kubelet/pods/c96eb47d-13fc-4a65-9fe6-292dda4b1fec/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.679777 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf362037-b6e5-4dce-8da1-698fd75ff850" path="/var/lib/kubelet/pods/cf362037-b6e5-4dce-8da1-698fd75ff850/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.680436 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2123e1b-d624-44fa-9b74-b7379df16f9a" path="/var/lib/kubelet/pods/e2123e1b-d624-44fa-9b74-b7379df16f9a/volumes" 
Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.681228 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb1a1b8c-7e96-4505-91b3-5816d78a62e8" path="/var/lib/kubelet/pods/eb1a1b8c-7e96-4505-91b3-5816d78a62e8/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.682863 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f" path="/var/lib/kubelet/pods/fb74bd67-97b1-4f96-9f5d-6ebc4b14ab4f/volumes" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.686257 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-kube-api-access-rtggw" (OuterVolumeSpecName: "kube-api-access-rtggw") pod "0f8be55d-39c3-4ede-aff3-62890aa7c0e5" (UID: "0f8be55d-39c3-4ede-aff3-62890aa7c0e5"). InnerVolumeSpecName "kube-api-access-rtggw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.711381 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sjcd\" (UniqueName: \"kubernetes.io/projected/5ce1f24f-01df-4958-8d8d-29b46e248f3a-kube-api-access-9sjcd\") pod \"5ce1f24f-01df-4958-8d8d-29b46e248f3a\" (UID: \"5ce1f24f-01df-4958-8d8d-29b46e248f3a\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.711603 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ce1f24f-01df-4958-8d8d-29b46e248f3a-operator-scripts\") pod \"5ce1f24f-01df-4958-8d8d-29b46e248f3a\" (UID: \"5ce1f24f-01df-4958-8d8d-29b46e248f3a\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.715977 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce1f24f-01df-4958-8d8d-29b46e248f3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ce1f24f-01df-4958-8d8d-29b46e248f3a" (UID: 
"5ce1f24f-01df-4958-8d8d-29b46e248f3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.717772 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8125efc3-988e-4689-acea-515119c3764f-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.717805 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.717817 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.717827 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h5z4\" (UniqueName: \"kubernetes.io/projected/8125efc3-988e-4689-acea-515119c3764f-kube-api-access-7h5z4\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.717839 4787 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.717858 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.717867 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8125efc3-988e-4689-acea-515119c3764f-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: 
I0126 18:06:41.717875 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.717884 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ce1f24f-01df-4958-8d8d-29b46e248f3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.717893 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtggw\" (UniqueName: \"kubernetes.io/projected/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-kube-api-access-rtggw\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.731460 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce1f24f-01df-4958-8d8d-29b46e248f3a-kube-api-access-9sjcd" (OuterVolumeSpecName: "kube-api-access-9sjcd") pod "5ce1f24f-01df-4958-8d8d-29b46e248f3a" (UID: "5ce1f24f-01df-4958-8d8d-29b46e248f3a"). InnerVolumeSpecName "kube-api-access-9sjcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.749986 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-99a8-account-create-update-82x75" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.780526 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8125efc3-988e-4689-acea-515119c3764f" (UID: "8125efc3-988e-4689-acea-515119c3764f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.787078 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.814849 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f8be55d-39c3-4ede-aff3-62890aa7c0e5" (UID: "0f8be55d-39c3-4ede-aff3-62890aa7c0e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.819126 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-svc\") pod \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.819199 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-config\") pod \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.819291 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-nb\") pod \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.819495 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-sb\") pod \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.819534 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shs4t\" (UniqueName: \"kubernetes.io/projected/890207cf-4cf3-4962-b5a1-19c076fbdeaa-kube-api-access-shs4t\") pod \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.819574 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-swift-storage-0\") pod \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.820121 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sjcd\" (UniqueName: \"kubernetes.io/projected/5ce1f24f-01df-4958-8d8d-29b46e248f3a-kube-api-access-9sjcd\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.820140 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.820151 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.820163 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:41 crc 
kubenswrapper[4787]: I0126 18:06:41.824204 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "8125efc3-988e-4689-acea-515119c3764f" (UID: "8125efc3-988e-4689-acea-515119c3764f"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:41 crc kubenswrapper[4787]: I0126 18:06:41.862229 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/890207cf-4cf3-4962-b5a1-19c076fbdeaa-kube-api-access-shs4t" (OuterVolumeSpecName: "kube-api-access-shs4t") pod "890207cf-4cf3-4962-b5a1-19c076fbdeaa" (UID: "890207cf-4cf3-4962-b5a1-19c076fbdeaa"). InnerVolumeSpecName "kube-api-access-shs4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.912168 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-571d-account-create-update-2kgf7" event={"ID":"751af144-58aa-4770-b232-98816c3498d2","Type":"ContainerStarted","Data":"ecab774ea078770c0eed9dbeca178535ae496a6e89a0948cc72ec8dde29d91d5"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.923082 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10dd5fe1-04d0-4902-b12f-6fd0c592641f-operator-scripts\") pod \"10dd5fe1-04d0-4902-b12f-6fd0c592641f\" (UID: \"10dd5fe1-04d0-4902-b12f-6fd0c592641f\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.923226 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqnbd\" (UniqueName: \"kubernetes.io/projected/10dd5fe1-04d0-4902-b12f-6fd0c592641f-kube-api-access-rqnbd\") pod \"10dd5fe1-04d0-4902-b12f-6fd0c592641f\" (UID: \"10dd5fe1-04d0-4902-b12f-6fd0c592641f\") " Jan 26 18:06:43 crc 
kubenswrapper[4787]: I0126 18:06:41.923842 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.923856 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shs4t\" (UniqueName: \"kubernetes.io/projected/890207cf-4cf3-4962-b5a1-19c076fbdeaa-kube-api-access-shs4t\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.924130 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10dd5fe1-04d0-4902-b12f-6fd0c592641f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10dd5fe1-04d0-4902-b12f-6fd0c592641f" (UID: "10dd5fe1-04d0-4902-b12f-6fd0c592641f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.944926 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8125efc3-988e-4689-acea-515119c3764f/ovsdbserver-nb/0.log" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.945054 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8125efc3-988e-4689-acea-515119c3764f","Type":"ContainerDied","Data":"4dd4dbe51885917f88d5db2e6a4328592c065999c2e440e6f7522ec870225f26"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.945107 4787 scope.go:117] "RemoveContainer" containerID="c3b83431cb1aae7c46fbdf0e58b9cfa69783cc9a87a4bf1097cc0a7b9aad22e8" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.945240 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.950017 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10dd5fe1-04d0-4902-b12f-6fd0c592641f-kube-api-access-rqnbd" (OuterVolumeSpecName: "kube-api-access-rqnbd") pod "10dd5fe1-04d0-4902-b12f-6fd0c592641f" (UID: "10dd5fe1-04d0-4902-b12f-6fd0c592641f"). InnerVolumeSpecName "kube-api-access-rqnbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.957226 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerID="346eebb01a3f72fc52b37308e2a116af63d29c1e13d92d2e955b17ea8a8bd265" exitCode=143 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.957301 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-684775777b-sd52g" event={"ID":"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa","Type":"ContainerDied","Data":"346eebb01a3f72fc52b37308e2a116af63d29c1e13d92d2e955b17ea8a8bd265"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.969473 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "8125efc3-988e-4689-acea-515119c3764f" (UID: "8125efc3-988e-4689-acea-515119c3764f"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.972197 4787 generic.go:334] "Generic (PLEG): container finished" podID="38c9405f-74d3-4282-90ac-0a9909a68b43" containerID="a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.972254 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c9405f-74d3-4282-90ac-0a9909a68b43","Type":"ContainerDied","Data":"a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.982144 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b570-account-create-update-8grdt" event={"ID":"5ce1f24f-01df-4958-8d8d-29b46e248f3a","Type":"ContainerDied","Data":"afd1c412a5d1e02ce224dd3a619c5d9371024d8e743c6aebd0f8d0bcfda748b4"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.982241 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b570-account-create-update-8grdt" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.993554 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2j4vw_0f8be55d-39c3-4ede-aff3-62890aa7c0e5/openstack-network-exporter/0.log" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.993643 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2j4vw" event={"ID":"0f8be55d-39c3-4ede-aff3-62890aa7c0e5","Type":"ContainerDied","Data":"fe04cfaae07824886ccc62159a830aa87b1ba061010afce352785d069624b629"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:41.993728 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-2j4vw" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.002013 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-545b-account-create-update-wbcdd" event={"ID":"add7dee2-b5b8-4a04-84fd-76f323e7e444","Type":"ContainerStarted","Data":"5c5116af3a4e5013c21e025f10c4f99920a4dd437d48e40f0ec7e7df3374d30c"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.025347 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10dd5fe1-04d0-4902-b12f-6fd0c592641f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.025378 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqnbd\" (UniqueName: \"kubernetes.io/projected/10dd5fe1-04d0-4902-b12f-6fd0c592641f-kube-api-access-rqnbd\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.025392 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8125efc3-988e-4689-acea-515119c3764f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.030096 4787 generic.go:334] "Generic (PLEG): container finished" podID="4b418d30-17a4-4cd9-a16d-c10b1f030492" containerID="9087d36e049ad60d022d43c8eeebecb1463be62c5936b387b3dae05e8f883648" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.030124 4787 generic.go:334] "Generic (PLEG): container finished" podID="4b418d30-17a4-4cd9-a16d-c10b1f030492" containerID="f119de1c64759eab74ed8d3728129ed6c4867af79ba4028686238074915d9c03" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.030177 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dfdd88467-46gsc" 
event={"ID":"4b418d30-17a4-4cd9-a16d-c10b1f030492","Type":"ContainerDied","Data":"9087d36e049ad60d022d43c8eeebecb1463be62c5936b387b3dae05e8f883648"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.030202 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dfdd88467-46gsc" event={"ID":"4b418d30-17a4-4cd9-a16d-c10b1f030492","Type":"ContainerDied","Data":"f119de1c64759eab74ed8d3728129ed6c4867af79ba4028686238074915d9c03"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.031939 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "890207cf-4cf3-4962-b5a1-19c076fbdeaa" (UID: "890207cf-4cf3-4962-b5a1-19c076fbdeaa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.035161 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "890207cf-4cf3-4962-b5a1-19c076fbdeaa" (UID: "890207cf-4cf3-4962-b5a1-19c076fbdeaa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.036707 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-99a8-account-create-update-82x75" event={"ID":"10dd5fe1-04d0-4902-b12f-6fd0c592641f","Type":"ContainerDied","Data":"cd26f656ccd4bfb3f85a26d7c8140940c273e7279f4eb42eb2a95de27f0b6ee8"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.036791 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-99a8-account-create-update-82x75" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.064241 4787 generic.go:334] "Generic (PLEG): container finished" podID="7ef9888f-835c-40ea-9d3b-c7084cbffd65" containerID="7378f982f6cae9a52f12316a36e90af7a418c481c9a556fe677e423d6b63de51" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.064327 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7ef9888f-835c-40ea-9d3b-c7084cbffd65","Type":"ContainerDied","Data":"7378f982f6cae9a52f12316a36e90af7a418c481c9a556fe677e423d6b63de51"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.069826 4787 generic.go:334] "Generic (PLEG): container finished" podID="08858ab3-fd32-43dd-8002-bb2b01216237" containerID="8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.069897 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86f4885877-fz869" event={"ID":"08858ab3-fd32-43dd-8002-bb2b01216237","Type":"ContainerDied","Data":"8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.071197 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "890207cf-4cf3-4962-b5a1-19c076fbdeaa" (UID: "890207cf-4cf3-4962-b5a1-19c076fbdeaa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.072508 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "0f8be55d-39c3-4ede-aff3-62890aa7c0e5" (UID: "0f8be55d-39c3-4ede-aff3-62890aa7c0e5"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:42.085357 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-config podName:890207cf-4cf3-4962-b5a1-19c076fbdeaa nodeName:}" failed. No retries permitted until 2026-01-26 18:06:42.585327068 +0000 UTC m=+1371.292463251 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-config") pod "890207cf-4cf3-4962-b5a1-19c076fbdeaa" (UID: "890207cf-4cf3-4962-b5a1-19c076fbdeaa") : error deleting /var/lib/kubelet/pods/890207cf-4cf3-4962-b5a1-19c076fbdeaa/volume-subpaths: remove /var/lib/kubelet/pods/890207cf-4cf3-4962-b5a1-19c076fbdeaa/volume-subpaths: no such file or directory Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.085706 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "890207cf-4cf3-4962-b5a1-19c076fbdeaa" (UID: "890207cf-4cf3-4962-b5a1-19c076fbdeaa"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.093372 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.093404 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.093416 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.093426 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.093490 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.093521 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.093534 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2"} Jan 26 18:06:43 crc 
kubenswrapper[4787]: I0126 18:06:42.093545 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.098656 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" event={"ID":"890207cf-4cf3-4962-b5a1-19c076fbdeaa","Type":"ContainerDied","Data":"ccb2e7541241846f9d05971f21e6b0d12ed7cd8993a39e48515683a4e3df279e"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.098756 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-zw55k" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.110579 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.113462 4787 generic.go:334] "Generic (PLEG): container finished" podID="1e181982-6b54-4a6f-a52c-eb025b767fb0" containerID="fe98b6784049128c409c7bb089303be8bb4a940ea14a70bee91761c87ae35d2b" exitCode=143 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.113547 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b64b47c9-scfhg" event={"ID":"1e181982-6b54-4a6f-a52c-eb025b767fb0","Type":"ContainerDied","Data":"fe98b6784049128c409c7bb089303be8bb4a940ea14a70bee91761c87ae35d2b"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.115578 4787 generic.go:334] "Generic (PLEG): container finished" podID="e52ed138-0046-4bf6-b8f0-7bd5fb016f16" containerID="bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110" exitCode=1 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.115597 4787 generic.go:334] "Generic (PLEG): container finished" podID="e52ed138-0046-4bf6-b8f0-7bd5fb016f16" 
containerID="832830a90028608c790b768d0aff195477684857f28e7be56072174f5075f085" exitCode=1 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.115652 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cc2d8" event={"ID":"e52ed138-0046-4bf6-b8f0-7bd5fb016f16","Type":"ContainerDied","Data":"bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.115677 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cc2d8" event={"ID":"e52ed138-0046-4bf6-b8f0-7bd5fb016f16","Type":"ContainerDied","Data":"832830a90028608c790b768d0aff195477684857f28e7be56072174f5075f085"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.116169 4787 scope.go:117] "RemoveContainer" containerID="832830a90028608c790b768d0aff195477684857f28e7be56072174f5075f085" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:42.116602 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-cc2d8_openstack(e52ed138-0046-4bf6-b8f0-7bd5fb016f16)\"" pod="openstack/root-account-create-update-cc2d8" podUID="e52ed138-0046-4bf6-b8f0-7bd5fb016f16" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.124053 4787 generic.go:334] "Generic (PLEG): container finished" podID="6333914f-1303-43b8-ac9b-88c29e2bea64" containerID="769d9a6f91f28c2b54187f40ecdd34c5613915c3aa1b59893f5cdce81e1a0acf" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.124251 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6333914f-1303-43b8-ac9b-88c29e2bea64","Type":"ContainerDied","Data":"769d9a6f91f28c2b54187f40ecdd34c5613915c3aa1b59893f5cdce81e1a0acf"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.127012 
4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.127035 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8be55d-39c3-4ede-aff3-62890aa7c0e5-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.127046 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.127057 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.127066 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.227279 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.258196 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.317642 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333292 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kolla-config\") pod \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333385 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2cbz\" (UniqueName: \"kubernetes.io/projected/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kube-api-access-v2cbz\") pod \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333491 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw4s4\" (UniqueName: \"kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-kube-api-access-xw4s4\") pod \"4b418d30-17a4-4cd9-a16d-c10b1f030492\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333532 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-etc-swift\") pod \"4b418d30-17a4-4cd9-a16d-c10b1f030492\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333566 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-generated\") pod \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333600 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-combined-ca-bundle\") pod \"4b418d30-17a4-4cd9-a16d-c10b1f030492\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333624 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-default\") pod \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333665 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333687 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-config-data\") pod \"4b418d30-17a4-4cd9-a16d-c10b1f030492\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333710 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-log-httpd\") pod \"4b418d30-17a4-4cd9-a16d-c10b1f030492\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333726 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-galera-tls-certs\") pod \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 
18:06:42.333749 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-internal-tls-certs\") pod \"4b418d30-17a4-4cd9-a16d-c10b1f030492\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333765 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-combined-ca-bundle\") pod \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333808 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-public-tls-certs\") pod \"4b418d30-17a4-4cd9-a16d-c10b1f030492\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333826 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-run-httpd\") pod \"4b418d30-17a4-4cd9-a16d-c10b1f030492\" (UID: \"4b418d30-17a4-4cd9-a16d-c10b1f030492\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.333852 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-operator-scripts\") pod \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\" (UID: \"7ef9888f-835c-40ea-9d3b-c7084cbffd65\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.335216 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"7ef9888f-835c-40ea-9d3b-c7084cbffd65" (UID: "7ef9888f-835c-40ea-9d3b-c7084cbffd65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.335923 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "7ef9888f-835c-40ea-9d3b-c7084cbffd65" (UID: "7ef9888f-835c-40ea-9d3b-c7084cbffd65"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.336555 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-99a8-account-create-update-82x75"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.336596 4787 scope.go:117] "RemoveContainer" containerID="dce4e5453293ee450ee0ab6aaf6225e5f156c6dba0c7e6bdcccbe6c15fd75397" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.338553 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4b418d30-17a4-4cd9-a16d-c10b1f030492" (UID: "4b418d30-17a4-4cd9-a16d-c10b1f030492"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.341195 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "7ef9888f-835c-40ea-9d3b-c7084cbffd65" (UID: "7ef9888f-835c-40ea-9d3b-c7084cbffd65"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.341566 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4b418d30-17a4-4cd9-a16d-c10b1f030492" (UID: "4b418d30-17a4-4cd9-a16d-c10b1f030492"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.343575 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "7ef9888f-835c-40ea-9d3b-c7084cbffd65" (UID: "7ef9888f-835c-40ea-9d3b-c7084cbffd65"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.344916 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kube-api-access-v2cbz" (OuterVolumeSpecName: "kube-api-access-v2cbz") pod "7ef9888f-835c-40ea-9d3b-c7084cbffd65" (UID: "7ef9888f-835c-40ea-9d3b-c7084cbffd65"). InnerVolumeSpecName "kube-api-access-v2cbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.354426 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4b418d30-17a4-4cd9-a16d-c10b1f030492" (UID: "4b418d30-17a4-4cd9-a16d-c10b1f030492"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.359546 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-99a8-account-create-update-82x75"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.379251 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-kube-api-access-xw4s4" (OuterVolumeSpecName: "kube-api-access-xw4s4") pod "4b418d30-17a4-4cd9-a16d-c10b1f030492" (UID: "4b418d30-17a4-4cd9-a16d-c10b1f030492"). InnerVolumeSpecName "kube-api-access-xw4s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.392104 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b570-account-create-update-8grdt"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.415474 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "mysql-db") pod "7ef9888f-835c-40ea-9d3b-c7084cbffd65" (UID: "7ef9888f-835c-40ea-9d3b-c7084cbffd65"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.415528 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b570-account-create-update-8grdt"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.435231 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-config-data\") pod \"6333914f-1303-43b8-ac9b-88c29e2bea64\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.435272 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzjjd\" (UniqueName: \"kubernetes.io/projected/6333914f-1303-43b8-ac9b-88c29e2bea64-kube-api-access-jzjjd\") pod \"6333914f-1303-43b8-ac9b-88c29e2bea64\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.435323 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-combined-ca-bundle\") pod \"6333914f-1303-43b8-ac9b-88c29e2bea64\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.435394 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-nova-novncproxy-tls-certs\") pod \"6333914f-1303-43b8-ac9b-88c29e2bea64\" (UID: \"6333914f-1303-43b8-ac9b-88c29e2bea64\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.435453 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-vencrypt-tls-certs\") pod \"6333914f-1303-43b8-ac9b-88c29e2bea64\" (UID: 
\"6333914f-1303-43b8-ac9b-88c29e2bea64\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.436180 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.436208 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.436220 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.436232 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b418d30-17a4-4cd9-a16d-c10b1f030492-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.436243 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.436252 4787 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.436263 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2cbz\" (UniqueName: \"kubernetes.io/projected/7ef9888f-835c-40ea-9d3b-c7084cbffd65-kube-api-access-v2cbz\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.436276 4787 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw4s4\" (UniqueName: \"kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-kube-api-access-xw4s4\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.436286 4787 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b418d30-17a4-4cd9-a16d-c10b1f030492-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.436296 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7ef9888f-835c-40ea-9d3b-c7084cbffd65-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.453412 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6333914f-1303-43b8-ac9b-88c29e2bea64-kube-api-access-jzjjd" (OuterVolumeSpecName: "kube-api-access-jzjjd") pod "6333914f-1303-43b8-ac9b-88c29e2bea64" (UID: "6333914f-1303-43b8-ac9b-88c29e2bea64"). InnerVolumeSpecName "kube-api-access-jzjjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.455096 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.477185 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.480915 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b418d30-17a4-4cd9-a16d-c10b1f030492" (UID: "4b418d30-17a4-4cd9-a16d-c10b1f030492"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.485091 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.493873 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b418d30-17a4-4cd9-a16d-c10b1f030492" (UID: "4b418d30-17a4-4cd9-a16d-c10b1f030492"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.498001 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6333914f-1303-43b8-ac9b-88c29e2bea64" (UID: "6333914f-1303-43b8-ac9b-88c29e2bea64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.498328 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b418d30-17a4-4cd9-a16d-c10b1f030492" (UID: "4b418d30-17a4-4cd9-a16d-c10b1f030492"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.501930 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-config-data" (OuterVolumeSpecName: "config-data") pod "4b418d30-17a4-4cd9-a16d-c10b1f030492" (UID: "4b418d30-17a4-4cd9-a16d-c10b1f030492"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.512231 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ef9888f-835c-40ea-9d3b-c7084cbffd65" (UID: "7ef9888f-835c-40ea-9d3b-c7084cbffd65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.520692 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-config-data" (OuterVolumeSpecName: "config-data") pod "6333914f-1303-43b8-ac9b-88c29e2bea64" (UID: "6333914f-1303-43b8-ac9b-88c29e2bea64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.539797 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.539842 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzjjd\" (UniqueName: \"kubernetes.io/projected/6333914f-1303-43b8-ac9b-88c29e2bea64-kube-api-access-jzjjd\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.539882 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.539897 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.539909 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.539922 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.539930 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.540028 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.540044 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b418d30-17a4-4cd9-a16d-c10b1f030492-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.553695 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "7ef9888f-835c-40ea-9d3b-c7084cbffd65" (UID: "7ef9888f-835c-40ea-9d3b-c7084cbffd65"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.566684 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "6333914f-1303-43b8-ac9b-88c29e2bea64" (UID: "6333914f-1303-43b8-ac9b-88c29e2bea64"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:42.590051 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.590366 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-2j4vw"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:42.590956 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:42.590989 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:42.591905 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:42.591958 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:42.592124 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:42.593784 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:42.593826 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovs-vswitchd" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.598102 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-2j4vw"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.605961 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "6333914f-1303-43b8-ac9b-88c29e2bea64" (UID: "6333914f-1303-43b8-ac9b-88c29e2bea64"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.624596 4787 scope.go:117] "RemoveContainer" containerID="9304d3acaa85f83d020a39079016b3b206affa7cd56bed13c15d8a89719d6e20" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.641543 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-config\") pod \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\" (UID: \"890207cf-4cf3-4962-b5a1-19c076fbdeaa\") " Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.642129 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-config" (OuterVolumeSpecName: "config") pod "890207cf-4cf3-4962-b5a1-19c076fbdeaa" (UID: "890207cf-4cf3-4962-b5a1-19c076fbdeaa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.642284 4787 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.642301 4787 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6333914f-1303-43b8-ac9b-88c29e2bea64-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.642310 4787 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ef9888f-835c-40ea-9d3b-c7084cbffd65-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.642318 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/890207cf-4cf3-4962-b5a1-19c076fbdeaa-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.663218 4787 scope.go:117] "RemoveContainer" containerID="1dc0bd2a4ecc7a68b77285c82ea23aa2dabc9c62ebde2984da45e41acbf3b11c" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.685028 4787 scope.go:117] "RemoveContainer" containerID="1bf368f648b16f9080c130c5f979912ccf94597615e98ae4c78c8f74cbb65e2d" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.736004 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.736348 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="ceilometer-central-agent" containerID="cri-o://85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898" gracePeriod=30 
Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.736456 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="sg-core" containerID="cri-o://75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c" gracePeriod=30 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.736525 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="ceilometer-notification-agent" containerID="cri-o://33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27" gracePeriod=30 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.736664 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="proxy-httpd" containerID="cri-o://2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb" gracePeriod=30 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.832746 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.833106 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ca591aea-a146-4b51-887e-9688a249fdad" containerName="kube-state-metrics" containerID="cri-o://43961d5d06f3ba0dea16f2a4bbb78bb16b8ea8ace60a31feebf82fac2516b093" gracePeriod=30 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.883262 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-zw55k"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.888582 4787 scope.go:117] "RemoveContainer" containerID="92ee99ec8f96ad8d93512ba959ebd6a2d327963c172a5fa70335f6edb958771d" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.909855 4787 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-zw55k"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.946962 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.947178 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="28ba670d-d2d7-47aa-bc54-6da4d0e532f3" containerName="memcached" containerID="cri-o://cacee73bf45c04d60820e3ca12199d2fcac4ad380185a4c29165ce46e0b6bc52" gracePeriod=30 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:42.972897 4787 scope.go:117] "RemoveContainer" containerID="bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.008139 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dabc44d45ae4780a822ff11965f740a88ac60a4cb63ea6c15426d7cfbc4851b5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.009133 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dabc44d45ae4780a822ff11965f740a88ac60a4cb63ea6c15426d7cfbc4851b5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.017502 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b1b9-account-create-update-7ghm2"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.017913 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="dabc44d45ae4780a822ff11965f740a88ac60a4cb63ea6c15426d7cfbc4851b5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.017994 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="41cca919-781a-48fb-99c1-ec7ebbb7c601" containerName="nova-cell0-conductor-conductor" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.046715 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b1b9-account-create-update-7ghm2"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.056837 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b1b9-account-create-update-2zlhz"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057288 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890207cf-4cf3-4962-b5a1-19c076fbdeaa" containerName="dnsmasq-dns" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057304 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="890207cf-4cf3-4962-b5a1-19c076fbdeaa" containerName="dnsmasq-dns" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057322 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6333914f-1303-43b8-ac9b-88c29e2bea64" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057328 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6333914f-1303-43b8-ac9b-88c29e2bea64" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057339 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef9888f-835c-40ea-9d3b-c7084cbffd65" containerName="mysql-bootstrap" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057345 4787 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="7ef9888f-835c-40ea-9d3b-c7084cbffd65" containerName="mysql-bootstrap" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057355 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b418d30-17a4-4cd9-a16d-c10b1f030492" containerName="proxy-server" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057361 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b418d30-17a4-4cd9-a16d-c10b1f030492" containerName="proxy-server" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057374 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac3cbab-86ff-4544-bf13-b1039585edbe" containerName="ovsdbserver-sb" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057380 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac3cbab-86ff-4544-bf13-b1039585edbe" containerName="ovsdbserver-sb" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057395 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8be55d-39c3-4ede-aff3-62890aa7c0e5" containerName="openstack-network-exporter" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057400 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8be55d-39c3-4ede-aff3-62890aa7c0e5" containerName="openstack-network-exporter" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057422 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b418d30-17a4-4cd9-a16d-c10b1f030492" containerName="proxy-httpd" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057427 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b418d30-17a4-4cd9-a16d-c10b1f030492" containerName="proxy-httpd" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057437 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8125efc3-988e-4689-acea-515119c3764f" containerName="ovsdbserver-nb" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057443 4787 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8125efc3-988e-4689-acea-515119c3764f" containerName="ovsdbserver-nb" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057465 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890207cf-4cf3-4962-b5a1-19c076fbdeaa" containerName="init" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057471 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="890207cf-4cf3-4962-b5a1-19c076fbdeaa" containerName="init" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057482 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8125efc3-988e-4689-acea-515119c3764f" containerName="openstack-network-exporter" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057487 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8125efc3-988e-4689-acea-515119c3764f" containerName="openstack-network-exporter" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057495 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef9888f-835c-40ea-9d3b-c7084cbffd65" containerName="galera" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057500 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef9888f-835c-40ea-9d3b-c7084cbffd65" containerName="galera" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.057510 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac3cbab-86ff-4544-bf13-b1039585edbe" containerName="openstack-network-exporter" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057515 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac3cbab-86ff-4544-bf13-b1039585edbe" containerName="openstack-network-exporter" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057683 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6333914f-1303-43b8-ac9b-88c29e2bea64" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057696 4787 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2ac3cbab-86ff-4544-bf13-b1039585edbe" containerName="ovsdbserver-sb" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057706 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b418d30-17a4-4cd9-a16d-c10b1f030492" containerName="proxy-httpd" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057715 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8125efc3-988e-4689-acea-515119c3764f" containerName="ovsdbserver-nb" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057724 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8125efc3-988e-4689-acea-515119c3764f" containerName="openstack-network-exporter" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057741 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef9888f-835c-40ea-9d3b-c7084cbffd65" containerName="galera" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057750 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="890207cf-4cf3-4962-b5a1-19c076fbdeaa" containerName="dnsmasq-dns" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057761 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8be55d-39c3-4ede-aff3-62890aa7c0e5" containerName="openstack-network-exporter" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057769 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b418d30-17a4-4cd9-a16d-c10b1f030492" containerName="proxy-server" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.057779 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac3cbab-86ff-4544-bf13-b1039585edbe" containerName="openstack-network-exporter" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.058649 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.060413 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.062942 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b1b9-account-create-update-2zlhz"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.070693 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-v8c8m"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.088325 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-v8c8m"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.106686 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-x45gw"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.115873 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-x45gw"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.124861 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.139913 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-76b5d4b8cb-wb8cc"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.140168 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-76b5d4b8cb-wb8cc" podUID="d85cc53a-0132-4491-82a1-056badced30c" containerName="keystone-api" containerID="cri-o://0b9197c0a132a15fde8443730e1e6afbd7356943bd0400304fb06f44082983a1" gracePeriod=30 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.153358 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9664z"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.163921 4787 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/keystone-db-create-9664z"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.178296 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts\") pod \"keystone-b1b9-account-create-update-2zlhz\" (UID: \"b7c7197d-6ffc-4db0-8074-ff1782c9ce7a\") " pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.178321 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da1e593_c678_4e37_9471_f261e82e004c.slice/crio-9a46126017130bcb88ed35c15b2b1eb4c16a399a072a641f98f81df2958d8bcc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f8a7212_44ab_42f1_86be_8b79726fe4f8.slice/crio-conmon-c41ea7e4541f8199f1a664d5d047bd04b1cc721396791e7f0e6eb3e40078f9bf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f8a7212_44ab_42f1_86be_8b79726fe4f8.slice/crio-c41ea7e4541f8199f1a664d5d047bd04b1cc721396791e7f0e6eb3e40078f9bf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod890207cf_4cf3_4962_b5a1_19c076fbdeaa.slice\": RecentStats: unable to find data in memory cache]" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.178468 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrqj\" (UniqueName: 
\"kubernetes.io/projected/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-kube-api-access-7rrqj\") pod \"keystone-b1b9-account-create-update-2zlhz\" (UID: \"b7c7197d-6ffc-4db0-8074-ff1782c9ce7a\") " pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.182996 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b1b9-account-create-update-2zlhz"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.196631 4787 generic.go:334] "Generic (PLEG): container finished" podID="f43c375f-1176-442e-98fd-5d9acba6e199" containerID="bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.196724 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f43c375f-1176-442e-98fd-5d9acba6e199","Type":"ContainerDied","Data":"bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.200490 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cc2d8"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.200868 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7ef9888f-835c-40ea-9d3b-c7084cbffd65","Type":"ContainerDied","Data":"1711b2bbcf17b91b0159c6b600a6f474a2f74caef1cd3917c38e44ee5a7ad3b4"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.200984 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.218920 4787 generic.go:334] "Generic (PLEG): container finished" podID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerID="75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c" exitCode=2 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.219098 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b97fef39-62f3-4457-924d-4b25c40fe88d","Type":"ContainerDied","Data":"75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c"} Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.220597 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e is running failed: container process not found" containerID="bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.221208 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e is running failed: container process not found" containerID="bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.222066 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e is running failed: container process not found" containerID="bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.222101 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f43c375f-1176-442e-98fd-5d9acba6e199" containerName="nova-scheduler-scheduler" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.224909 4787 generic.go:334] "Generic (PLEG): container finished" podID="3f8a7212-44ab-42f1-86be-8b79726fe4f8" containerID="c41ea7e4541f8199f1a664d5d047bd04b1cc721396791e7f0e6eb3e40078f9bf" exitCode=0 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.224982 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69b7546858-x7nbv" event={"ID":"3f8a7212-44ab-42f1-86be-8b79726fe4f8","Type":"ContainerDied","Data":"c41ea7e4541f8199f1a664d5d047bd04b1cc721396791e7f0e6eb3e40078f9bf"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.228040 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dfdd88467-46gsc" event={"ID":"4b418d30-17a4-4cd9-a16d-c10b1f030492","Type":"ContainerDied","Data":"b5e1a7fb7bf10043d74490bc811cff07cff7938880445e0c2ec067feb074ccee"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.229052 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5dfdd88467-46gsc" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.231050 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-7rrqj operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-b1b9-account-create-update-2zlhz" podUID="b7c7197d-6ffc-4db0-8074-ff1782c9ce7a" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.242233 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.245010 4787 generic.go:334] "Generic (PLEG): container finished" podID="ca591aea-a146-4b51-887e-9688a249fdad" containerID="43961d5d06f3ba0dea16f2a4bbb78bb16b8ea8ace60a31feebf82fac2516b093" exitCode=2 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.245110 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ca591aea-a146-4b51-887e-9688a249fdad","Type":"ContainerDied","Data":"43961d5d06f3ba0dea16f2a4bbb78bb16b8ea8ace60a31feebf82fac2516b093"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.253914 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.261511 4787 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-cc2d8" secret="" err="secret \"galera-openstack-dockercfg-s8jwd\" not found" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.261587 4787 scope.go:117] "RemoveContainer" containerID="832830a90028608c790b768d0aff195477684857f28e7be56072174f5075f085" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.261908 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.262089 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-cc2d8_openstack(e52ed138-0046-4bf6-b8f0-7bd5fb016f16)\"" pod="openstack/root-account-create-update-cc2d8" podUID="e52ed138-0046-4bf6-b8f0-7bd5fb016f16" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.262579 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6333914f-1303-43b8-ac9b-88c29e2bea64","Type":"ContainerDied","Data":"7f0f6e3fc225bc0f915b35abdf948a03abc7edb2914dd5a8f9893401bd85440c"} Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.268820 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="44392b57-bc6b-4a8b-8ff3-346fab2422af" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": read tcp 10.217.0.2:41992->10.217.0.164:8776: read: connection reset by peer" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.281244 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rrqj\" (UniqueName: \"kubernetes.io/projected/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-kube-api-access-7rrqj\") pod \"keystone-b1b9-account-create-update-2zlhz\" (UID: \"b7c7197d-6ffc-4db0-8074-ff1782c9ce7a\") " pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.281762 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts\") pod \"keystone-b1b9-account-create-update-2zlhz\" (UID: 
\"b7c7197d-6ffc-4db0-8074-ff1782c9ce7a\") " pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.281982 4787 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.282076 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts podName:b7c7197d-6ffc-4db0-8074-ff1782c9ce7a nodeName:}" failed. No retries permitted until 2026-01-26 18:06:43.782054349 +0000 UTC m=+1372.489190492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts") pod "keystone-b1b9-account-create-update-2zlhz" (UID: "b7c7197d-6ffc-4db0-8074-ff1782c9ce7a") : configmap "openstack-scripts" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.299330 4787 projected.go:194] Error preparing data for projected volume kube-api-access-7rrqj for pod openstack/keystone-b1b9-account-create-update-2zlhz: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.299461 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-kube-api-access-7rrqj podName:b7c7197d-6ffc-4db0-8074-ff1782c9ce7a nodeName:}" failed. No retries permitted until 2026-01-26 18:06:43.79943906 +0000 UTC m=+1372.506575193 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7rrqj" (UniqueName: "kubernetes.io/projected/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-kube-api-access-7rrqj") pod "keystone-b1b9-account-create-update-2zlhz" (UID: "b7c7197d-6ffc-4db0-8074-ff1782c9ce7a") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.313010 4787 scope.go:117] "RemoveContainer" containerID="bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.321152 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110\": container with ID starting with bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110 not found: ID does not exist" containerID="bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.321231 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110"} err="failed to get container status \"bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110\": rpc error: code = NotFound desc = could not find container \"bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110\": container with ID starting with bac9ff138e165b3a6a6147a4dde9235a516d3d37c4f35b38f751ee787c2dc110 not found: ID does not exist" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.321290 4787 scope.go:117] "RemoveContainer" containerID="7378f982f6cae9a52f12316a36e90af7a418c481c9a556fe677e423d6b63de51" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.383406 4787 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.383509 4787 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts podName:e52ed138-0046-4bf6-b8f0-7bd5fb016f16 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:43.883483793 +0000 UTC m=+1372.590619976 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts") pod "root-account-create-update-cc2d8" (UID: "e52ed138-0046-4bf6-b8f0-7bd5fb016f16") : configmap "openstack-scripts" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.383574 4787 secret.go:188] Couldn't get secret openstack/cinder-scheduler-config-data: secret "cinder-scheduler-config-data" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.383642 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:47.383627167 +0000 UTC m=+1376.090763300 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-scheduler-config-data" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.383676 4787 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.383696 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data podName:bb1abb80-0591-49c7-b549-969066392a5a nodeName:}" failed. 
No retries permitted until 2026-01-26 18:06:47.383691119 +0000 UTC m=+1376.090827252 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data") pod "rabbitmq-cell1-server-0" (UID: "bb1abb80-0591-49c7-b549-969066392a5a") : configmap "rabbitmq-cell1-config-data" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.383731 4787 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.383750 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:47.38374462 +0000 UTC m=+1376.090880753 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-scripts" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.383780 4787 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.383795 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:47.383790731 +0000 UTC m=+1376.090926864 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-config-data" not found Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.468713 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="d3871ebb-6b25-4e36-a3a1-3e9a220768f5" containerName="galera" containerID="cri-o://b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe" gracePeriod=30 Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.673216 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f8be55d-39c3-4ede-aff3-62890aa7c0e5" path="/var/lib/kubelet/pods/0f8be55d-39c3-4ede-aff3-62890aa7c0e5/volumes" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.677004 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": read tcp 10.217.0.2:49298->10.217.0.200:8775: read: connection reset by peer" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.677061 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10dd5fe1-04d0-4902-b12f-6fd0c592641f" path="/var/lib/kubelet/pods/10dd5fe1-04d0-4902-b12f-6fd0c592641f/volumes" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.677643 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cc058b4-5013-4418-ba61-1d9f98b624af" path="/var/lib/kubelet/pods/1cc058b4-5013-4418-ba61-1d9f98b624af/volumes" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.678051 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.0.200:8775/\": read tcp 10.217.0.2:49306->10.217.0.200:8775: read: connection reset by peer" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.678532 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2acdb479-1c97-4508-b835-b04a0b0aa436" path="/var/lib/kubelet/pods/2acdb479-1c97-4508-b835-b04a0b0aa436/volumes" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.699335 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4577a0-e573-4acc-9d34-0b42da7381f8" path="/var/lib/kubelet/pods/2d4577a0-e573-4acc-9d34-0b42da7381f8/volumes" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.700352 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce1f24f-01df-4958-8d8d-29b46e248f3a" path="/var/lib/kubelet/pods/5ce1f24f-01df-4958-8d8d-29b46e248f3a/volumes" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.700932 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ef9888f-835c-40ea-9d3b-c7084cbffd65" path="/var/lib/kubelet/pods/7ef9888f-835c-40ea-9d3b-c7084cbffd65/volumes" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.705508 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bb1abb80-0591-49c7-b549-969066392a5a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.709209 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8125efc3-988e-4689-acea-515119c3764f" path="/var/lib/kubelet/pods/8125efc3-988e-4689-acea-515119c3764f/volumes" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.720203 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="890207cf-4cf3-4962-b5a1-19c076fbdeaa" path="/var/lib/kubelet/pods/890207cf-4cf3-4962-b5a1-19c076fbdeaa/volumes" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.720960 4787 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5fb4a7-fca6-412e-819e-dac6667d92d6" path="/var/lib/kubelet/pods/dc5fb4a7-fca6-412e-819e-dac6667d92d6/volumes" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.729080 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5dfdd88467-46gsc"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.764294 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-5dfdd88467-46gsc"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.796379 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts\") pod \"keystone-b1b9-account-create-update-2zlhz\" (UID: \"b7c7197d-6ffc-4db0-8074-ff1782c9ce7a\") " pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.796626 4787 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.796681 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts podName:b7c7197d-6ffc-4db0-8074-ff1782c9ce7a nodeName:}" failed. No retries permitted until 2026-01-26 18:06:44.796662059 +0000 UTC m=+1373.503798192 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts") pod "keystone-b1b9-account-create-update-2zlhz" (UID: "b7c7197d-6ffc-4db0-8074-ff1782c9ce7a") : configmap "openstack-scripts" not found Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.809057 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.840876 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.898210 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rrqj\" (UniqueName: \"kubernetes.io/projected/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-kube-api-access-7rrqj\") pod \"keystone-b1b9-account-create-update-2zlhz\" (UID: \"b7c7197d-6ffc-4db0-8074-ff1782c9ce7a\") " pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.898653 4787 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.898697 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts podName:e52ed138-0046-4bf6-b8f0-7bd5fb016f16 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:44.898684367 +0000 UTC m=+1373.605820500 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts") pod "root-account-create-update-cc2d8" (UID: "e52ed138-0046-4bf6-b8f0-7bd5fb016f16") : configmap "openstack-scripts" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.904509 4787 projected.go:194] Error preparing data for projected volume kube-api-access-7rrqj for pod openstack/keystone-b1b9-account-create-update-2zlhz: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 26 18:06:43 crc kubenswrapper[4787]: E0126 18:06:43.904601 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-kube-api-access-7rrqj podName:b7c7197d-6ffc-4db0-8074-ff1782c9ce7a nodeName:}" failed. No retries permitted until 2026-01-26 18:06:44.904583707 +0000 UTC m=+1373.611719830 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7rrqj" (UniqueName: "kubernetes.io/projected/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-kube-api-access-7rrqj") pod "keystone-b1b9-account-create-update-2zlhz" (UID: "b7c7197d-6ffc-4db0-8074-ff1782c9ce7a") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.908565 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-684775777b-sd52g" podUID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:55354->10.217.0.159:9311: read: connection reset by peer" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.908595 4787 scope.go:117] "RemoveContainer" containerID="64bceda317e097fc87337cc8e59ab97a25c19ff22286cd1844866b2f8f52e65a" Jan 26 18:06:43 crc kubenswrapper[4787]: I0126 18:06:43.908568 4787 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-684775777b-sd52g" podUID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:55370->10.217.0.159:9311: read: connection reset by peer" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.075521 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="57e65f25-43dd-4baf-b2fa-7256dcbd452d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.285028 4787 generic.go:334] "Generic (PLEG): container finished" podID="1e181982-6b54-4a6f-a52c-eb025b767fb0" containerID="96b6dc83a1412919436b7c09b8eb24e55f4c4ec160f6f0f178b289df71a6d06a" exitCode=0 Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.285110 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b64b47c9-scfhg" event={"ID":"1e181982-6b54-4a6f-a52c-eb025b767fb0","Type":"ContainerDied","Data":"96b6dc83a1412919436b7c09b8eb24e55f4c4ec160f6f0f178b289df71a6d06a"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.317691 4787 generic.go:334] "Generic (PLEG): container finished" podID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerID="ebda6eab466c6a4894926fb8d2902783e462f0d182221a273f361e590df56e44" exitCode=0 Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.317763 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"426dd23d-ce8f-4f72-aece-79585de1cef1","Type":"ContainerDied","Data":"ebda6eab466c6a4894926fb8d2902783e462f0d182221a273f361e590df56e44"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.324892 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-571d-account-create-update-2kgf7" 
event={"ID":"751af144-58aa-4770-b232-98816c3498d2","Type":"ContainerDied","Data":"ecab774ea078770c0eed9dbeca178535ae496a6e89a0948cc72ec8dde29d91d5"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.324938 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecab774ea078770c0eed9dbeca178535ae496a6e89a0948cc72ec8dde29d91d5" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.328575 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-558e-account-create-update-b4hcn" event={"ID":"df4a77a8-4971-432f-81db-ac6be78f24a0","Type":"ContainerDied","Data":"6341c3d9da1068129c2e5e715f5cd800742640d1faf7c6b8133c93aea6557ad0"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.328665 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6341c3d9da1068129c2e5e715f5cd800742640d1faf7c6b8133c93aea6557ad0" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.331576 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerID="5b93e665a1925c67d4a5127df1172c0fcd71ee7863b68ee1361c8117a65ecb61" exitCode=0 Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.331610 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-684775777b-sd52g" event={"ID":"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa","Type":"ContainerDied","Data":"5b93e665a1925c67d4a5127df1172c0fcd71ee7863b68ee1361c8117a65ecb61"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.338167 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ca591aea-a146-4b51-887e-9688a249fdad","Type":"ContainerDied","Data":"b8bb793fedf2bb76e6ea3e8223fd7eb0f2cb130108dd45414388e1e84a58afdd"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.338253 4787 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b8bb793fedf2bb76e6ea3e8223fd7eb0f2cb130108dd45414388e1e84a58afdd" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.341135 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" event={"ID":"c9b36455-e624-4719-b164-29f4afacfb4d","Type":"ContainerDied","Data":"8145188e3fa3588b15b88579918745d5400b11e460ccc129029ee3a31fc31aeb"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.341203 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8145188e3fa3588b15b88579918745d5400b11e460ccc129029ee3a31fc31aeb" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.346457 4787 generic.go:334] "Generic (PLEG): container finished" podID="c27796ea-3db5-42ad-8b22-e4d774e28578" containerID="e8e42b27e122fd736da88e5dda90848d4ff33a360274bad85a5080141095b729" exitCode=0 Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.346536 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" event={"ID":"c27796ea-3db5-42ad-8b22-e4d774e28578","Type":"ContainerDied","Data":"e8e42b27e122fd736da88e5dda90848d4ff33a360274bad85a5080141095b729"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.348413 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69b7546858-x7nbv" event={"ID":"3f8a7212-44ab-42f1-86be-8b79726fe4f8","Type":"ContainerDied","Data":"80d5fd9a44e0d66e217601733f94e5a55f13c2b2e82b16d5445ad717f3c7df9b"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.348439 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80d5fd9a44e0d66e217601733f94e5a55f13c2b2e82b16d5445ad717f3c7df9b" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.351045 4787 generic.go:334] "Generic (PLEG): container finished" podID="87c0c3c8-9282-45e5-b376-9c335e24573a" containerID="ef0bc42b7bd2dab6b5fbabd59a7d254961bab025f7081c2b30ff94990add57bc" exitCode=0 Jan 26 
18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.351120 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"87c0c3c8-9282-45e5-b376-9c335e24573a","Type":"ContainerDied","Data":"ef0bc42b7bd2dab6b5fbabd59a7d254961bab025f7081c2b30ff94990add57bc"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.354899 4787 generic.go:334] "Generic (PLEG): container finished" podID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerID="2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb" exitCode=0 Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.354930 4787 generic.go:334] "Generic (PLEG): container finished" podID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerID="85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898" exitCode=0 Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.354980 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b97fef39-62f3-4457-924d-4b25c40fe88d","Type":"ContainerDied","Data":"2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.355023 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b97fef39-62f3-4457-924d-4b25c40fe88d","Type":"ContainerDied","Data":"85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.360529 4787 generic.go:334] "Generic (PLEG): container finished" podID="fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" containerID="011bedc70e267a9340d9ea488ce83a6f2966f96cd9f36e47bd7028368ceb1135" exitCode=0 Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.360644 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a","Type":"ContainerDied","Data":"011bedc70e267a9340d9ea488ce83a6f2966f96cd9f36e47bd7028368ceb1135"} Jan 26 18:06:44 crc 
kubenswrapper[4787]: I0126 18:06:44.368470 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-545b-account-create-update-wbcdd" event={"ID":"add7dee2-b5b8-4a04-84fd-76f323e7e444","Type":"ContainerDied","Data":"5c5116af3a4e5013c21e025f10c4f99920a4dd437d48e40f0ec7e7df3374d30c"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.368542 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c5116af3a4e5013c21e025f10c4f99920a4dd437d48e40f0ec7e7df3374d30c" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.379661 4787 generic.go:334] "Generic (PLEG): container finished" podID="44392b57-bc6b-4a8b-8ff3-346fab2422af" containerID="fb5b68e300042cdeab819b0943f78da3bc13f53fb414daf36d4826013aa7717e" exitCode=0 Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.379762 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44392b57-bc6b-4a8b-8ff3-346fab2422af","Type":"ContainerDied","Data":"fb5b68e300042cdeab819b0943f78da3bc13f53fb414daf36d4826013aa7717e"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.379793 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44392b57-bc6b-4a8b-8ff3-346fab2422af","Type":"ContainerDied","Data":"b0287cc48fe8d9b4345591edc1a2e6fd0c5bb984ec158af735fff33eb8d4962d"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.379807 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0287cc48fe8d9b4345591edc1a2e6fd0c5bb984ec158af735fff33eb8d4962d" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.388739 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f43c375f-1176-442e-98fd-5d9acba6e199","Type":"ContainerDied","Data":"00f01706a8da327e7bf646ef73decd923cfad633a525671c58d7586f9030a44e"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.388787 4787 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="00f01706a8da327e7bf646ef73decd923cfad633a525671c58d7586f9030a44e" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.389508 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.390891 4787 generic.go:334] "Generic (PLEG): container finished" podID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerID="822ecb7ff1646f9b506cca634869b822b27bb00a188755e74a6e4b48e2ab3e95" exitCode=0 Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.390966 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12e77a01-1165-4f0a-ad35-fe127b5ae6c0","Type":"ContainerDied","Data":"822ecb7ff1646f9b506cca634869b822b27bb00a188755e74a6e4b48e2ab3e95"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.402487 4787 generic.go:334] "Generic (PLEG): container finished" podID="28ba670d-d2d7-47aa-bc54-6da4d0e532f3" containerID="cacee73bf45c04d60820e3ca12199d2fcac4ad380185a4c29165ce46e0b6bc52" exitCode=0 Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.402590 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.402694 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"28ba670d-d2d7-47aa-bc54-6da4d0e532f3","Type":"ContainerDied","Data":"cacee73bf45c04d60820e3ca12199d2fcac4ad380185a4c29165ce46e0b6bc52"} Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.424760 4787 scope.go:117] "RemoveContainer" containerID="9087d36e049ad60d022d43c8eeebecb1463be62c5936b387b3dae05e8f883648" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.428322 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" Jan 26 18:06:44 crc kubenswrapper[4787]: E0126 18:06:44.483909 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65139bd59ec594a52aca13930dabadeabe11be030f710dd6f822faa0baa01ffa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 18:06:44 crc kubenswrapper[4787]: E0126 18:06:44.494519 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65139bd59ec594a52aca13930dabadeabe11be030f710dd6f822faa0baa01ffa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 18:06:44 crc kubenswrapper[4787]: E0126 18:06:44.497863 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65139bd59ec594a52aca13930dabadeabe11be030f710dd6f822faa0baa01ffa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 18:06:44 crc kubenswrapper[4787]: E0126 18:06:44.497905 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="afd68dd7-739f-4cd0-b3eb-c786b79c4b40" containerName="nova-cell1-conductor-conductor" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.516661 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.520590 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-config\") pod \"ca591aea-a146-4b51-887e-9688a249fdad\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.520764 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kkfm\" (UniqueName: \"kubernetes.io/projected/ca591aea-a146-4b51-887e-9688a249fdad-kube-api-access-7kkfm\") pod \"ca591aea-a146-4b51-887e-9688a249fdad\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.520849 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b36455-e624-4719-b164-29f4afacfb4d-operator-scripts\") pod \"c9b36455-e624-4719-b164-29f4afacfb4d\" (UID: \"c9b36455-e624-4719-b164-29f4afacfb4d\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.520923 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-certs\") pod \"ca591aea-a146-4b51-887e-9688a249fdad\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.521004 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8v5k\" (UniqueName: \"kubernetes.io/projected/c9b36455-e624-4719-b164-29f4afacfb4d-kube-api-access-l8v5k\") pod \"c9b36455-e624-4719-b164-29f4afacfb4d\" (UID: \"c9b36455-e624-4719-b164-29f4afacfb4d\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.521100 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-combined-ca-bundle\") pod \"ca591aea-a146-4b51-887e-9688a249fdad\" (UID: \"ca591aea-a146-4b51-887e-9688a249fdad\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.521615 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b36455-e624-4719-b164-29f4afacfb4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9b36455-e624-4719-b164-29f4afacfb4d" (UID: "c9b36455-e624-4719-b164-29f4afacfb4d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.522185 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b36455-e624-4719-b164-29f4afacfb4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.528811 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca591aea-a146-4b51-887e-9688a249fdad-kube-api-access-7kkfm" (OuterVolumeSpecName: "kube-api-access-7kkfm") pod "ca591aea-a146-4b51-887e-9688a249fdad" (UID: "ca591aea-a146-4b51-887e-9688a249fdad"). InnerVolumeSpecName "kube-api-access-7kkfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.529978 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-558e-account-create-update-b4hcn" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.531698 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b36455-e624-4719-b164-29f4afacfb4d-kube-api-access-l8v5k" (OuterVolumeSpecName: "kube-api-access-l8v5k") pod "c9b36455-e624-4719-b164-29f4afacfb4d" (UID: "c9b36455-e624-4719-b164-29f4afacfb4d"). InnerVolumeSpecName "kube-api-access-l8v5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.555053 4787 scope.go:117] "RemoveContainer" containerID="f119de1c64759eab74ed8d3728129ed6c4867af79ba4028686238074915d9c03" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.562379 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.573933 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca591aea-a146-4b51-887e-9688a249fdad" (UID: "ca591aea-a146-4b51-887e-9688a249fdad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.574470 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-545b-account-create-update-wbcdd" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.574457 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "ca591aea-a146-4b51-887e-9688a249fdad" (UID: "ca591aea-a146-4b51-887e-9688a249fdad"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.599385 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-571d-account-create-update-2kgf7" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.622893 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df4a77a8-4971-432f-81db-ac6be78f24a0-operator-scripts\") pod \"df4a77a8-4971-432f-81db-ac6be78f24a0\" (UID: \"df4a77a8-4971-432f-81db-ac6be78f24a0\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.622971 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq4f9\" (UniqueName: \"kubernetes.io/projected/df4a77a8-4971-432f-81db-ac6be78f24a0-kube-api-access-qq4f9\") pod \"df4a77a8-4971-432f-81db-ac6be78f24a0\" (UID: \"df4a77a8-4971-432f-81db-ac6be78f24a0\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.623021 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-internal-tls-certs\") pod \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.623086 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-combined-ca-bundle\") pod \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.623108 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft289\" (UniqueName: \"kubernetes.io/projected/3f8a7212-44ab-42f1-86be-8b79726fe4f8-kube-api-access-ft289\") pod 
\"3f8a7212-44ab-42f1-86be-8b79726fe4f8\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.623142 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-public-tls-certs\") pod \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.623172 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-config-data\") pod \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.623193 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f8a7212-44ab-42f1-86be-8b79726fe4f8-logs\") pod \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.623404 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-scripts\") pod \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\" (UID: \"3f8a7212-44ab-42f1-86be-8b79726fe4f8\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.623876 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8v5k\" (UniqueName: \"kubernetes.io/projected/c9b36455-e624-4719-b164-29f4afacfb4d-kube-api-access-l8v5k\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.623906 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.623924 4787 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.623938 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kkfm\" (UniqueName: \"kubernetes.io/projected/ca591aea-a146-4b51-887e-9688a249fdad-kube-api-access-7kkfm\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.633416 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-scripts" (OuterVolumeSpecName: "scripts") pod "3f8a7212-44ab-42f1-86be-8b79726fe4f8" (UID: "3f8a7212-44ab-42f1-86be-8b79726fe4f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.633479 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df4a77a8-4971-432f-81db-ac6be78f24a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df4a77a8-4971-432f-81db-ac6be78f24a0" (UID: "df4a77a8-4971-432f-81db-ac6be78f24a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.633566 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f8a7212-44ab-42f1-86be-8b79726fe4f8-logs" (OuterVolumeSpecName: "logs") pod "3f8a7212-44ab-42f1-86be-8b79726fe4f8" (UID: "3f8a7212-44ab-42f1-86be-8b79726fe4f8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.640338 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4a77a8-4971-432f-81db-ac6be78f24a0-kube-api-access-qq4f9" (OuterVolumeSpecName: "kube-api-access-qq4f9") pod "df4a77a8-4971-432f-81db-ac6be78f24a0" (UID: "df4a77a8-4971-432f-81db-ac6be78f24a0"). InnerVolumeSpecName "kube-api-access-qq4f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.650599 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f8a7212-44ab-42f1-86be-8b79726fe4f8-kube-api-access-ft289" (OuterVolumeSpecName: "kube-api-access-ft289") pod "3f8a7212-44ab-42f1-86be-8b79726fe4f8" (UID: "3f8a7212-44ab-42f1-86be-8b79726fe4f8"). InnerVolumeSpecName "kube-api-access-ft289". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.651913 4787 scope.go:117] "RemoveContainer" containerID="769d9a6f91f28c2b54187f40ecdd34c5613915c3aa1b59893f5cdce81e1a0acf" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.682077 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "ca591aea-a146-4b51-887e-9688a249fdad" (UID: "ca591aea-a146-4b51-887e-9688a249fdad"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.691823 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.695710 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.705761 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.724613 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx2dk\" (UniqueName: \"kubernetes.io/projected/f43c375f-1176-442e-98fd-5d9acba6e199-kube-api-access-hx2dk\") pod \"f43c375f-1176-442e-98fd-5d9acba6e199\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.724751 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/751af144-58aa-4770-b232-98816c3498d2-operator-scripts\") pod \"751af144-58aa-4770-b232-98816c3498d2\" (UID: \"751af144-58aa-4770-b232-98816c3498d2\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.724802 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8rs9\" (UniqueName: \"kubernetes.io/projected/add7dee2-b5b8-4a04-84fd-76f323e7e444-kube-api-access-c8rs9\") pod \"add7dee2-b5b8-4a04-84fd-76f323e7e444\" (UID: \"add7dee2-b5b8-4a04-84fd-76f323e7e444\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.724829 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-config-data\") pod \"f43c375f-1176-442e-98fd-5d9acba6e199\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.724862 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add7dee2-b5b8-4a04-84fd-76f323e7e444-operator-scripts\") pod 
\"add7dee2-b5b8-4a04-84fd-76f323e7e444\" (UID: \"add7dee2-b5b8-4a04-84fd-76f323e7e444\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.724985 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-combined-ca-bundle\") pod \"f43c375f-1176-442e-98fd-5d9acba6e199\" (UID: \"f43c375f-1176-442e-98fd-5d9acba6e199\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.725032 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5njg\" (UniqueName: \"kubernetes.io/projected/751af144-58aa-4770-b232-98816c3498d2-kube-api-access-d5njg\") pod \"751af144-58aa-4770-b232-98816c3498d2\" (UID: \"751af144-58aa-4770-b232-98816c3498d2\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.725382 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751af144-58aa-4770-b232-98816c3498d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "751af144-58aa-4770-b232-98816c3498d2" (UID: "751af144-58aa-4770-b232-98816c3498d2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.726313 4787 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca591aea-a146-4b51-887e-9688a249fdad-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.726364 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.726380 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df4a77a8-4971-432f-81db-ac6be78f24a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.726394 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq4f9\" (UniqueName: \"kubernetes.io/projected/df4a77a8-4971-432f-81db-ac6be78f24a0-kube-api-access-qq4f9\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.726433 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/751af144-58aa-4770-b232-98816c3498d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.726449 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft289\" (UniqueName: \"kubernetes.io/projected/3f8a7212-44ab-42f1-86be-8b79726fe4f8-kube-api-access-ft289\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.726462 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f8a7212-44ab-42f1-86be-8b79726fe4f8-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: 
I0126 18:06:44.730221 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/add7dee2-b5b8-4a04-84fd-76f323e7e444-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "add7dee2-b5b8-4a04-84fd-76f323e7e444" (UID: "add7dee2-b5b8-4a04-84fd-76f323e7e444"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.731071 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f43c375f-1176-442e-98fd-5d9acba6e199-kube-api-access-hx2dk" (OuterVolumeSpecName: "kube-api-access-hx2dk") pod "f43c375f-1176-442e-98fd-5d9acba6e199" (UID: "f43c375f-1176-442e-98fd-5d9acba6e199"). InnerVolumeSpecName "kube-api-access-hx2dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.732479 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.745007 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f8a7212-44ab-42f1-86be-8b79726fe4f8" (UID: "3f8a7212-44ab-42f1-86be-8b79726fe4f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.752191 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751af144-58aa-4770-b232-98816c3498d2-kube-api-access-d5njg" (OuterVolumeSpecName: "kube-api-access-d5njg") pod "751af144-58aa-4770-b232-98816c3498d2" (UID: "751af144-58aa-4770-b232-98816c3498d2"). InnerVolumeSpecName "kube-api-access-d5njg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.752457 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.756906 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add7dee2-b5b8-4a04-84fd-76f323e7e444-kube-api-access-c8rs9" (OuterVolumeSpecName: "kube-api-access-c8rs9") pod "add7dee2-b5b8-4a04-84fd-76f323e7e444" (UID: "add7dee2-b5b8-4a04-84fd-76f323e7e444"). InnerVolumeSpecName "kube-api-access-c8rs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.762484 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.763371 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-config-data" (OuterVolumeSpecName: "config-data") pod "3f8a7212-44ab-42f1-86be-8b79726fe4f8" (UID: "3f8a7212-44ab-42f1-86be-8b79726fe4f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.794209 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f43c375f-1176-442e-98fd-5d9acba6e199" (UID: "f43c375f-1176-442e-98fd-5d9acba6e199"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.796054 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.807242 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-config-data" (OuterVolumeSpecName: "config-data") pod "f43c375f-1176-442e-98fd-5d9acba6e199" (UID: "f43c375f-1176-442e-98fd-5d9acba6e199"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.831991 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-httpd-run\") pod \"87c0c3c8-9282-45e5-b376-9c335e24573a\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.832263 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-config-data\") pod \"87c0c3c8-9282-45e5-b376-9c335e24573a\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.832362 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data-custom\") pod \"44392b57-bc6b-4a8b-8ff3-346fab2422af\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.832472 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-public-tls-certs\") pod \"44392b57-bc6b-4a8b-8ff3-346fab2422af\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.832557 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data\") pod \"44392b57-bc6b-4a8b-8ff3-346fab2422af\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.832644 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-logs\") pod \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.832772 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-internal-tls-certs\") pod \"44392b57-bc6b-4a8b-8ff3-346fab2422af\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.832879 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-combined-ca-bundle\") pod \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.832994 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"87c0c3c8-9282-45e5-b376-9c335e24573a\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.833108 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-logs\") pod \"87c0c3c8-9282-45e5-b376-9c335e24573a\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: 
I0126 18:06:44.833211 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-internal-tls-certs\") pod \"87c0c3c8-9282-45e5-b376-9c335e24573a\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.833318 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-config-data\") pod \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.833415 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-httpd-run\") pod \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.833507 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-scripts\") pod \"44392b57-bc6b-4a8b-8ff3-346fab2422af\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.833616 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28cf7\" (UniqueName: \"kubernetes.io/projected/44392b57-bc6b-4a8b-8ff3-346fab2422af-kube-api-access-28cf7\") pod \"44392b57-bc6b-4a8b-8ff3-346fab2422af\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.833757 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-scripts\") pod \"87c0c3c8-9282-45e5-b376-9c335e24573a\" 
(UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.836013 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.836138 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-public-tls-certs\") pod \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.836289 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjrzk\" (UniqueName: \"kubernetes.io/projected/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-kube-api-access-hjrzk\") pod \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.836401 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-combined-ca-bundle\") pod \"44392b57-bc6b-4a8b-8ff3-346fab2422af\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.836521 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-scripts\") pod \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\" (UID: \"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.836622 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/44392b57-bc6b-4a8b-8ff3-346fab2422af-etc-machine-id\") pod \"44392b57-bc6b-4a8b-8ff3-346fab2422af\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.836722 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-combined-ca-bundle\") pod \"87c0c3c8-9282-45e5-b376-9c335e24573a\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.836932 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44392b57-bc6b-4a8b-8ff3-346fab2422af-logs\") pod \"44392b57-bc6b-4a8b-8ff3-346fab2422af\" (UID: \"44392b57-bc6b-4a8b-8ff3-346fab2422af\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.837057 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmzr4\" (UniqueName: \"kubernetes.io/projected/87c0c3c8-9282-45e5-b376-9c335e24573a-kube-api-access-kmzr4\") pod \"87c0c3c8-9282-45e5-b376-9c335e24573a\" (UID: \"87c0c3c8-9282-45e5-b376-9c335e24573a\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.837167 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "87c0c3c8-9282-45e5-b376-9c335e24573a" (UID: "87c0c3c8-9282-45e5-b376-9c335e24573a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.838213 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts\") pod \"keystone-b1b9-account-create-update-2zlhz\" (UID: \"b7c7197d-6ffc-4db0-8074-ff1782c9ce7a\") " pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.838444 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.838556 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.838644 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5njg\" (UniqueName: \"kubernetes.io/projected/751af144-58aa-4770-b232-98816c3498d2-kube-api-access-d5njg\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.838728 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.838831 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx2dk\" (UniqueName: \"kubernetes.io/projected/f43c375f-1176-442e-98fd-5d9acba6e199-kube-api-access-hx2dk\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.838913 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8rs9\" (UniqueName: 
\"kubernetes.io/projected/add7dee2-b5b8-4a04-84fd-76f323e7e444-kube-api-access-c8rs9\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.839037 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43c375f-1176-442e-98fd-5d9acba6e199-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.839147 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add7dee2-b5b8-4a04-84fd-76f323e7e444-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.839234 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: E0126 18:06:44.839742 4787 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 26 18:06:44 crc kubenswrapper[4787]: E0126 18:06:44.840227 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts podName:b7c7197d-6ffc-4db0-8074-ff1782c9ce7a nodeName:}" failed. No retries permitted until 2026-01-26 18:06:46.840193047 +0000 UTC m=+1375.547329240 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts") pod "keystone-b1b9-account-create-update-2zlhz" (UID: "b7c7197d-6ffc-4db0-8074-ff1782c9ce7a") : configmap "openstack-scripts" not found Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.842702 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-logs" (OuterVolumeSpecName: "logs") pod "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" (UID: "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.842823 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "44392b57-bc6b-4a8b-8ff3-346fab2422af" (UID: "44392b57-bc6b-4a8b-8ff3-346fab2422af"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.842874 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44392b57-bc6b-4a8b-8ff3-346fab2422af-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "44392b57-bc6b-4a8b-8ff3-346fab2422af" (UID: "44392b57-bc6b-4a8b-8ff3-346fab2422af"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.845286 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44392b57-bc6b-4a8b-8ff3-346fab2422af-logs" (OuterVolumeSpecName: "logs") pod "44392b57-bc6b-4a8b-8ff3-346fab2422af" (UID: "44392b57-bc6b-4a8b-8ff3-346fab2422af"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.848604 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-logs" (OuterVolumeSpecName: "logs") pod "87c0c3c8-9282-45e5-b376-9c335e24573a" (UID: "87c0c3c8-9282-45e5-b376-9c335e24573a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.849441 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" (UID: "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.861939 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c0c3c8-9282-45e5-b376-9c335e24573a-kube-api-access-kmzr4" (OuterVolumeSpecName: "kube-api-access-kmzr4") pod "87c0c3c8-9282-45e5-b376-9c335e24573a" (UID: "87c0c3c8-9282-45e5-b376-9c335e24573a"). InnerVolumeSpecName "kube-api-access-kmzr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.862298 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" (UID: "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.863189 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-scripts" (OuterVolumeSpecName: "scripts") pod "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" (UID: "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.864017 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-kube-api-access-hjrzk" (OuterVolumeSpecName: "kube-api-access-hjrzk") pod "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" (UID: "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a"). InnerVolumeSpecName "kube-api-access-hjrzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.867109 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44392b57-bc6b-4a8b-8ff3-346fab2422af-kube-api-access-28cf7" (OuterVolumeSpecName: "kube-api-access-28cf7") pod "44392b57-bc6b-4a8b-8ff3-346fab2422af" (UID: "44392b57-bc6b-4a8b-8ff3-346fab2422af"). InnerVolumeSpecName "kube-api-access-28cf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.874644 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "87c0c3c8-9282-45e5-b376-9c335e24573a" (UID: "87c0c3c8-9282-45e5-b376-9c335e24573a"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.879823 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-scripts" (OuterVolumeSpecName: "scripts") pod "87c0c3c8-9282-45e5-b376-9c335e24573a" (UID: "87c0c3c8-9282-45e5-b376-9c335e24573a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.895103 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-scripts" (OuterVolumeSpecName: "scripts") pod "44392b57-bc6b-4a8b-8ff3-346fab2422af" (UID: "44392b57-bc6b-4a8b-8ff3-346fab2422af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.937174 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.939955 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-combined-ca-bundle\") pod \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.940103 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shmsw\" (UniqueName: \"kubernetes.io/projected/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-kube-api-access-shmsw\") pod \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.940229 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-memcached-tls-certs\") pod \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.940326 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-config-data\") pod \"426dd23d-ce8f-4f72-aece-79585de1cef1\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.940417 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-combined-ca-bundle\") pod \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.940923 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-logs\") pod \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.941122 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-combined-ca-bundle\") pod \"426dd23d-ce8f-4f72-aece-79585de1cef1\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.941229 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426dd23d-ce8f-4f72-aece-79585de1cef1-logs\") pod \"426dd23d-ce8f-4f72-aece-79585de1cef1\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.941886 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-public-tls-certs\") pod \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.942067 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-internal-tls-certs\") pod \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.942162 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kolla-config\") pod \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " 
Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.942322 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-config-data\") pod \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.942503 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-nova-metadata-tls-certs\") pod \"426dd23d-ce8f-4f72-aece-79585de1cef1\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.942615 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prt5p\" (UniqueName: \"kubernetes.io/projected/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kube-api-access-prt5p\") pod \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.942861 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd59l\" (UniqueName: \"kubernetes.io/projected/426dd23d-ce8f-4f72-aece-79585de1cef1-kube-api-access-cd59l\") pod \"426dd23d-ce8f-4f72-aece-79585de1cef1\" (UID: \"426dd23d-ce8f-4f72-aece-79585de1cef1\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.942972 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-config-data\") pod \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\" (UID: \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.943399 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rrqj\" (UniqueName: 
\"kubernetes.io/projected/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-kube-api-access-7rrqj\") pod \"keystone-b1b9-account-create-update-2zlhz\" (UID: \"b7c7197d-6ffc-4db0-8074-ff1782c9ce7a\") " pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.943690 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.943773 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjrzk\" (UniqueName: \"kubernetes.io/projected/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-kube-api-access-hjrzk\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.943844 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.943908 4787 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44392b57-bc6b-4a8b-8ff3-346fab2422af-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.944175 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44392b57-bc6b-4a8b-8ff3-346fab2422af-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.944238 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmzr4\" (UniqueName: \"kubernetes.io/projected/87c0c3c8-9282-45e5-b376-9c335e24573a-kube-api-access-kmzr4\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.944292 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.944341 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.944407 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.944459 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c0c3c8-9282-45e5-b376-9c335e24573a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.944512 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.944564 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.944612 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28cf7\" (UniqueName: \"kubernetes.io/projected/44392b57-bc6b-4a8b-8ff3-346fab2422af-kube-api-access-28cf7\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.944661 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:44 crc kubenswrapper[4787]: E0126 18:06:44.947401 4787 configmap.go:193] 
Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 26 18:06:44 crc kubenswrapper[4787]: E0126 18:06:44.947557 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts podName:e52ed138-0046-4bf6-b8f0-7bd5fb016f16 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:46.947538653 +0000 UTC m=+1375.654674786 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts") pod "root-account-create-update-cc2d8" (UID: "e52ed138-0046-4bf6-b8f0-7bd5fb016f16") : configmap "openstack-scripts" not found Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.956582 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-logs" (OuterVolumeSpecName: "logs") pod "12e77a01-1165-4f0a-ad35-fe127b5ae6c0" (UID: "12e77a01-1165-4f0a-ad35-fe127b5ae6c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.956795 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "28ba670d-d2d7-47aa-bc54-6da4d0e532f3" (UID: "28ba670d-d2d7-47aa-bc54-6da4d0e532f3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.957500 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-config-data" (OuterVolumeSpecName: "config-data") pod "28ba670d-d2d7-47aa-bc54-6da4d0e532f3" (UID: "28ba670d-d2d7-47aa-bc54-6da4d0e532f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: E0126 18:06:44.959247 4787 projected.go:194] Error preparing data for projected volume kube-api-access-7rrqj for pod openstack/keystone-b1b9-account-create-update-2zlhz: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 26 18:06:44 crc kubenswrapper[4787]: E0126 18:06:44.966362 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-kube-api-access-7rrqj podName:b7c7197d-6ffc-4db0-8074-ff1782c9ce7a nodeName:}" failed. No retries permitted until 2026-01-26 18:06:46.966295507 +0000 UTC m=+1375.673431650 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7rrqj" (UniqueName: "kubernetes.io/projected/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-kube-api-access-7rrqj") pod "keystone-b1b9-account-create-update-2zlhz" (UID: "b7c7197d-6ffc-4db0-8074-ff1782c9ce7a") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.969558 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/426dd23d-ce8f-4f72-aece-79585de1cef1-logs" (OuterVolumeSpecName: "logs") pod "426dd23d-ce8f-4f72-aece-79585de1cef1" (UID: "426dd23d-ce8f-4f72-aece-79585de1cef1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.984399 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.996206 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:06:44 crc kubenswrapper[4787]: I0126 18:06:44.997648 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-kube-api-access-shmsw" (OuterVolumeSpecName: "kube-api-access-shmsw") pod "12e77a01-1165-4f0a-ad35-fe127b5ae6c0" (UID: "12e77a01-1165-4f0a-ad35-fe127b5ae6c0"). InnerVolumeSpecName "kube-api-access-shmsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.034141 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/426dd23d-ce8f-4f72-aece-79585de1cef1-kube-api-access-cd59l" (OuterVolumeSpecName: "kube-api-access-cd59l") pod "426dd23d-ce8f-4f72-aece-79585de1cef1" (UID: "426dd23d-ce8f-4f72-aece-79585de1cef1"). InnerVolumeSpecName "kube-api-access-cd59l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.045631 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c27796ea-3db5-42ad-8b22-e4d774e28578-logs\") pod \"c27796ea-3db5-42ad-8b22-e4d774e28578\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.045697 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data-custom\") pod \"c27796ea-3db5-42ad-8b22-e4d774e28578\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.045729 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data\") pod \"c27796ea-3db5-42ad-8b22-e4d774e28578\" (UID: 
\"c27796ea-3db5-42ad-8b22-e4d774e28578\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.045767 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5wx2\" (UniqueName: \"kubernetes.io/projected/c27796ea-3db5-42ad-8b22-e4d774e28578-kube-api-access-g5wx2\") pod \"c27796ea-3db5-42ad-8b22-e4d774e28578\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.045785 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-combined-ca-bundle\") pod \"c27796ea-3db5-42ad-8b22-e4d774e28578\" (UID: \"c27796ea-3db5-42ad-8b22-e4d774e28578\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.045817 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kube-api-access-prt5p" (OuterVolumeSpecName: "kube-api-access-prt5p") pod "28ba670d-d2d7-47aa-bc54-6da4d0e532f3" (UID: "28ba670d-d2d7-47aa-bc54-6da4d0e532f3"). InnerVolumeSpecName "kube-api-access-prt5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.046015 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prt5p\" (UniqueName: \"kubernetes.io/projected/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kube-api-access-prt5p\") pod \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\" (UID: \"28ba670d-d2d7-47aa-bc54-6da4d0e532f3\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.046612 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426dd23d-ce8f-4f72-aece-79585de1cef1-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.046626 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.046635 4787 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.046646 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd59l\" (UniqueName: \"kubernetes.io/projected/426dd23d-ce8f-4f72-aece-79585de1cef1-kube-api-access-cd59l\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.046655 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shmsw\" (UniqueName: \"kubernetes.io/projected/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-kube-api-access-shmsw\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.046665 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-logs\") on node \"crc\" 
DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: W0126 18:06:45.046728 4787 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/28ba670d-d2d7-47aa-bc54-6da4d0e532f3/volumes/kubernetes.io~projected/kube-api-access-prt5p Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.046741 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kube-api-access-prt5p" (OuterVolumeSpecName: "kube-api-access-prt5p") pod "28ba670d-d2d7-47aa-bc54-6da4d0e532f3" (UID: "28ba670d-d2d7-47aa-bc54-6da4d0e532f3"). InnerVolumeSpecName "kube-api-access-prt5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.047331 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c27796ea-3db5-42ad-8b22-e4d774e28578-logs" (OuterVolumeSpecName: "logs") pod "c27796ea-3db5-42ad-8b22-e4d774e28578" (UID: "c27796ea-3db5-42ad-8b22-e4d774e28578"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.049807 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data" (OuterVolumeSpecName: "config-data") pod "44392b57-bc6b-4a8b-8ff3-346fab2422af" (UID: "44392b57-bc6b-4a8b-8ff3-346fab2422af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.068695 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3f8a7212-44ab-42f1-86be-8b79726fe4f8" (UID: "3f8a7212-44ab-42f1-86be-8b79726fe4f8"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.068733 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3f8a7212-44ab-42f1-86be-8b79726fe4f8" (UID: "3f8a7212-44ab-42f1-86be-8b79726fe4f8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.092684 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27796ea-3db5-42ad-8b22-e4d774e28578-kube-api-access-g5wx2" (OuterVolumeSpecName: "kube-api-access-g5wx2") pod "c27796ea-3db5-42ad-8b22-e4d774e28578" (UID: "c27796ea-3db5-42ad-8b22-e4d774e28578"). InnerVolumeSpecName "kube-api-access-g5wx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.105738 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c27796ea-3db5-42ad-8b22-e4d774e28578" (UID: "c27796ea-3db5-42ad-8b22-e4d774e28578"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.117841 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "426dd23d-ce8f-4f72-aece-79585de1cef1" (UID: "426dd23d-ce8f-4f72-aece-79585de1cef1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.128614 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c27796ea-3db5-42ad-8b22-e4d774e28578" (UID: "c27796ea-3db5-42ad-8b22-e4d774e28578"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.130030 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.139808 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28ba670d-d2d7-47aa-bc54-6da4d0e532f3" (UID: "28ba670d-d2d7-47aa-bc54-6da4d0e532f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.147173 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44392b57-bc6b-4a8b-8ff3-346fab2422af" (UID: "44392b57-bc6b-4a8b-8ff3-346fab2422af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.147890 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v45x6\" (UniqueName: \"kubernetes.io/projected/1e181982-6b54-4a6f-a52c-eb025b767fb0-kube-api-access-v45x6\") pod \"1e181982-6b54-4a6f-a52c-eb025b767fb0\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148050 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12e77a01-1165-4f0a-ad35-fe127b5ae6c0" (UID: "12e77a01-1165-4f0a-ad35-fe127b5ae6c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148069 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data\") pod \"1e181982-6b54-4a6f-a52c-eb025b767fb0\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148157 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-internal-tls-certs\") pod \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148223 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data\") pod \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148381 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data-custom\") pod \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148479 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-public-tls-certs\") pod \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148503 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data-custom\") pod \"1e181982-6b54-4a6f-a52c-eb025b767fb0\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148540 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e181982-6b54-4a6f-a52c-eb025b767fb0-logs\") pod \"1e181982-6b54-4a6f-a52c-eb025b767fb0\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148587 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-logs\") pod \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148625 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-combined-ca-bundle\") pod \"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\" (UID: 
\"12e77a01-1165-4f0a-ad35-fe127b5ae6c0\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148657 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-combined-ca-bundle\") pod \"1e181982-6b54-4a6f-a52c-eb025b767fb0\" (UID: \"1e181982-6b54-4a6f-a52c-eb025b767fb0\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148728 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6jgv\" (UniqueName: \"kubernetes.io/projected/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-kube-api-access-d6jgv\") pod \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.148759 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-combined-ca-bundle\") pod \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\" (UID: \"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa\") " Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149354 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149374 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prt5p\" (UniqueName: \"kubernetes.io/projected/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-kube-api-access-prt5p\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149390 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 
18:06:45.149403 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149414 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c27796ea-3db5-42ad-8b22-e4d774e28578-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149428 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8a7212-44ab-42f1-86be-8b79726fe4f8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149439 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149451 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149464 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149476 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5wx2\" (UniqueName: \"kubernetes.io/projected/c27796ea-3db5-42ad-8b22-e4d774e28578-kube-api-access-g5wx2\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149487 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149501 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: W0126 18:06:45.149801 4787 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/12e77a01-1165-4f0a-ad35-fe127b5ae6c0/volumes/kubernetes.io~secret/combined-ca-bundle Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.149926 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12e77a01-1165-4f0a-ad35-fe127b5ae6c0" (UID: "12e77a01-1165-4f0a-ad35-fe127b5ae6c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.151236 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-logs" (OuterVolumeSpecName: "logs") pod "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" (UID: "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.152396 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e181982-6b54-4a6f-a52c-eb025b767fb0-kube-api-access-v45x6" (OuterVolumeSpecName: "kube-api-access-v45x6") pod "1e181982-6b54-4a6f-a52c-eb025b767fb0" (UID: "1e181982-6b54-4a6f-a52c-eb025b767fb0"). InnerVolumeSpecName "kube-api-access-v45x6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.156594 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1e181982-6b54-4a6f-a52c-eb025b767fb0" (UID: "1e181982-6b54-4a6f-a52c-eb025b767fb0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.179333 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" (UID: "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.196277 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e181982-6b54-4a6f-a52c-eb025b767fb0-logs" (OuterVolumeSpecName: "logs") pod "1e181982-6b54-4a6f-a52c-eb025b767fb0" (UID: "1e181982-6b54-4a6f-a52c-eb025b767fb0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.197413 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-kube-api-access-d6jgv" (OuterVolumeSpecName: "kube-api-access-d6jgv") pod "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" (UID: "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa"). InnerVolumeSpecName "kube-api-access-d6jgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.218062 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87c0c3c8-9282-45e5-b376-9c335e24573a" (UID: "87c0c3c8-9282-45e5-b376-9c335e24573a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.254402 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.254439 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.254451 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e181982-6b54-4a6f-a52c-eb025b767fb0-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.254462 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.254473 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.254481 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6jgv\" (UniqueName: 
\"kubernetes.io/projected/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-kube-api-access-d6jgv\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.254491 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v45x6\" (UniqueName: \"kubernetes.io/projected/1e181982-6b54-4a6f-a52c-eb025b767fb0-kube-api-access-v45x6\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.254500 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.280219 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.281612 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-config-data" (OuterVolumeSpecName: "config-data") pod "426dd23d-ce8f-4f72-aece-79585de1cef1" (UID: "426dd23d-ce8f-4f72-aece-79585de1cef1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.288908 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "44392b57-bc6b-4a8b-8ff3-346fab2422af" (UID: "44392b57-bc6b-4a8b-8ff3-346fab2422af"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.330090 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" (UID: "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.342440 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data" (OuterVolumeSpecName: "config-data") pod "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" (UID: "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.346869 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" (UID: "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.351858 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" (UID: "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.357350 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.357388 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.357403 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.357416 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.357428 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.357440 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.357451 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.367679 4787 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data" (OuterVolumeSpecName: "config-data") pod "1e181982-6b54-4a6f-a52c-eb025b767fb0" (UID: "1e181982-6b54-4a6f-a52c-eb025b767fb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.381180 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-config-data" (OuterVolumeSpecName: "config-data") pod "87c0c3c8-9282-45e5-b376-9c335e24573a" (UID: "87c0c3c8-9282-45e5-b376-9c335e24573a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.386302 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "12e77a01-1165-4f0a-ad35-fe127b5ae6c0" (UID: "12e77a01-1165-4f0a-ad35-fe127b5ae6c0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.398796 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "28ba670d-d2d7-47aa-bc54-6da4d0e532f3" (UID: "28ba670d-d2d7-47aa-bc54-6da4d0e532f3"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.422902 4787 generic.go:334] "Generic (PLEG): container finished" podID="41cca919-781a-48fb-99c1-ec7ebbb7c601" containerID="dabc44d45ae4780a822ff11965f740a88ac60a4cb63ea6c15426d7cfbc4851b5" exitCode=0 Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.422982 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"41cca919-781a-48fb-99c1-ec7ebbb7c601","Type":"ContainerDied","Data":"dabc44d45ae4780a822ff11965f740a88ac60a4cb63ea6c15426d7cfbc4851b5"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.423009 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"41cca919-781a-48fb-99c1-ec7ebbb7c601","Type":"ContainerDied","Data":"c63cd4f6718b8b3c9bb4e507576f786b503e50beb8b8bfa0c889831adf378f29"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.423020 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c63cd4f6718b8b3c9bb4e507576f786b503e50beb8b8bfa0c889831adf378f29" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.423512 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "44392b57-bc6b-4a8b-8ff3-346fab2422af" (UID: "44392b57-bc6b-4a8b-8ff3-346fab2422af"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.423818 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" (UID: "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.425234 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b64b47c9-scfhg" event={"ID":"1e181982-6b54-4a6f-a52c-eb025b767fb0","Type":"ContainerDied","Data":"ae6a4af4234275e5f4e927bcd6b7e953826b39e2517187743b3408b34d7b57d3"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.425278 4787 scope.go:117] "RemoveContainer" containerID="96b6dc83a1412919436b7c09b8eb24e55f4c4ec160f6f0f178b289df71a6d06a" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.425408 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b64b47c9-scfhg" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.431108 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e181982-6b54-4a6f-a52c-eb025b767fb0" (UID: "1e181982-6b54-4a6f-a52c-eb025b767fb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.437741 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"426dd23d-ce8f-4f72-aece-79585de1cef1","Type":"ContainerDied","Data":"25662c378377299e3b878d86b551bd23a25f1a7e2ef8681fa62cb4c26203912b"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.437876 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.444367 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd698097-04e8-4ad2-bc6e-fbdf16dfd12a","Type":"ContainerDied","Data":"c6ca2913e07b39b8260513df9634fa8b073f3967c79fb69a1deb111145bc9697"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.444469 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.448102 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cc2d8" event={"ID":"e52ed138-0046-4bf6-b8f0-7bd5fb016f16","Type":"ContainerDied","Data":"a5d5338d9c8213b816cbb1421fa1bd0746a67c5143c7fb9b9ef0237407ce911e"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.454518 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5d5338d9c8213b816cbb1421fa1bd0746a67c5143c7fb9b9ef0237407ce911e" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.450329 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" (UID: "fa430a7c-4527-4fdd-aab8-f0f2588ccdaa"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.460653 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"28ba670d-d2d7-47aa-bc54-6da4d0e532f3","Type":"ContainerDied","Data":"199d8b8e9622bcca1bb712c199085fbec6f15c848c4c4c9c0fc22a7f54cbb51e"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.460778 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.462975 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.464979 4787 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ba670d-d2d7-47aa-bc54-6da4d0e532f3-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.465065 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e181982-6b54-4a6f-a52c-eb025b767fb0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.465119 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.465170 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.465238 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44392b57-bc6b-4a8b-8ff3-346fab2422af-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.465291 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 
18:06:45.465342 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: E0126 18:06:45.477484 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe is running failed: container process not found" containerID="b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.481459 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-684775777b-sd52g" event={"ID":"fa430a7c-4527-4fdd-aab8-f0f2588ccdaa","Type":"ContainerDied","Data":"62a8d7fcb7d5c41b0f8668408872d9a8c778a4ed9bdb0a6bb74728f2b02368e4"} Jan 26 18:06:45 crc kubenswrapper[4787]: E0126 18:06:45.481538 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe is running failed: container process not found" containerID="b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.481785 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-684775777b-sd52g" Jan 26 18:06:45 crc kubenswrapper[4787]: E0126 18:06:45.482613 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe is running failed: container process not found" containerID="b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 26 18:06:45 crc kubenswrapper[4787]: E0126 18:06:45.482651 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="d3871ebb-6b25-4e36-a3a1-3e9a220768f5" containerName="galera" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.489895 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-config-data" (OuterVolumeSpecName: "config-data") pod "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" (UID: "fd698097-04e8-4ad2-bc6e-fbdf16dfd12a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.491724 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"87c0c3c8-9282-45e5-b376-9c335e24573a","Type":"ContainerDied","Data":"74898663b7ab36600082e0191660aade4b8129774e844edf18695597665d2e10"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.491928 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.500929 4787 generic.go:334] "Generic (PLEG): container finished" podID="afd68dd7-739f-4cd0-b3eb-c786b79c4b40" containerID="65139bd59ec594a52aca13930dabadeabe11be030f710dd6f822faa0baa01ffa" exitCode=0 Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.501009 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"afd68dd7-739f-4cd0-b3eb-c786b79c4b40","Type":"ContainerDied","Data":"65139bd59ec594a52aca13930dabadeabe11be030f710dd6f822faa0baa01ffa"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.501035 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"afd68dd7-739f-4cd0-b3eb-c786b79c4b40","Type":"ContainerDied","Data":"74078478df63213f01b88149228450d5550b6e6e9e883ed28a2d025f36e62d39"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.501047 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74078478df63213f01b88149228450d5550b6e6e9e883ed28a2d025f36e62d39" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.501453 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "12e77a01-1165-4f0a-ad35-fe127b5ae6c0" (UID: "12e77a01-1165-4f0a-ad35-fe127b5ae6c0"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.504280 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12e77a01-1165-4f0a-ad35-fe127b5ae6c0","Type":"ContainerDied","Data":"59d0423d93e070ac4ae6d1e6e027bea79213084a80f5a00eba9dbcf8ea8e9d91"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.504432 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.511788 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" event={"ID":"c27796ea-3db5-42ad-8b22-e4d774e28578","Type":"ContainerDied","Data":"bec22572ebbabb3f9460ce0b7fa55df8e89db2afa59233b739faf142480eacce"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.511918 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.534054 4787 generic.go:334] "Generic (PLEG): container finished" podID="d3871ebb-6b25-4e36-a3a1-3e9a220768f5" containerID="b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe" exitCode=0 Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.534170 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-545b-account-create-update-wbcdd" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.535121 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d3871ebb-6b25-4e36-a3a1-3e9a220768f5","Type":"ContainerDied","Data":"b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe"} Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.535226 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.537785 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-571d-account-create-update-2kgf7" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.538383 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fc4-account-create-update-zr6c9" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.539350 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.542340 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "87c0c3c8-9282-45e5-b376-9c335e24573a" (UID: "87c0c3c8-9282-45e5-b376-9c335e24573a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.542569 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.543762 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-558e-account-create-update-b4hcn" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.545063 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.546002 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69b7546858-x7nbv" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.567266 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-config-data" (OuterVolumeSpecName: "config-data") pod "12e77a01-1165-4f0a-ad35-fe127b5ae6c0" (UID: "12e77a01-1165-4f0a-ad35-fe127b5ae6c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.567542 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.567580 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c0c3c8-9282-45e5-b376-9c335e24573a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.567589 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.567615 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e77a01-1165-4f0a-ad35-fe127b5ae6c0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.599399 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "426dd23d-ce8f-4f72-aece-79585de1cef1" (UID: "426dd23d-ce8f-4f72-aece-79585de1cef1"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.602464 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data" (OuterVolumeSpecName: "config-data") pod "c27796ea-3db5-42ad-8b22-e4d774e28578" (UID: "c27796ea-3db5-42ad-8b22-e4d774e28578"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:45 crc kubenswrapper[4787]: E0126 18:06:45.671247 4787 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 26 18:06:45 crc kubenswrapper[4787]: E0126 18:06:45.671322 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data podName:57e65f25-43dd-4baf-b2fa-7256dcbd452d nodeName:}" failed. No retries permitted until 2026-01-26 18:06:53.67130442 +0000 UTC m=+1382.378440553 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data") pod "rabbitmq-server-0" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d") : configmap "rabbitmq-config-data" not found Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.672756 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27796ea-3db5-42ad-8b22-e4d774e28578-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.672807 4787 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/426dd23d-ce8f-4f72-aece-79585de1cef1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.697868 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b418d30-17a4-4cd9-a16d-c10b1f030492" path="/var/lib/kubelet/pods/4b418d30-17a4-4cd9-a16d-c10b1f030492/volumes" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.698742 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6333914f-1303-43b8-ac9b-88c29e2bea64" path="/var/lib/kubelet/pods/6333914f-1303-43b8-ac9b-88c29e2bea64/volumes" Jan 26 18:06:45 crc kubenswrapper[4787]: I0126 18:06:45.756117 4787 scope.go:117] "RemoveContainer" containerID="fe98b6784049128c409c7bb089303be8bb4a940ea14a70bee91761c87ae35d2b" Jan 26 18:06:46 crc kubenswrapper[4787]: E0126 18:06:45.812590 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1 is running failed: container process not found" containerID="17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 26 18:06:46 crc kubenswrapper[4787]: E0126 
18:06:45.813423 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1 is running failed: container process not found" containerID="17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 26 18:06:46 crc kubenswrapper[4787]: E0126 18:06:45.814007 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1 is running failed: container process not found" containerID="17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 26 18:06:46 crc kubenswrapper[4787]: E0126 18:06:45.814040 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="208dca76-0c20-4fd9-a685-76144777c48c" containerName="ovn-northd" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.117161 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.143885 4787 scope.go:117] "RemoveContainer" containerID="ebda6eab466c6a4894926fb8d2902783e462f0d182221a273f361e590df56e44" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.188042 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-combined-ca-bundle\") pod \"41cca919-781a-48fb-99c1-ec7ebbb7c601\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.188091 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-config-data\") pod \"41cca919-781a-48fb-99c1-ec7ebbb7c601\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.188233 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szh75\" (UniqueName: \"kubernetes.io/projected/41cca919-781a-48fb-99c1-ec7ebbb7c601-kube-api-access-szh75\") pod \"41cca919-781a-48fb-99c1-ec7ebbb7c601\" (UID: \"41cca919-781a-48fb-99c1-ec7ebbb7c601\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.260085 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41cca919-781a-48fb-99c1-ec7ebbb7c601-kube-api-access-szh75" (OuterVolumeSpecName: "kube-api-access-szh75") pod "41cca919-781a-48fb-99c1-ec7ebbb7c601" (UID: "41cca919-781a-48fb-99c1-ec7ebbb7c601"). InnerVolumeSpecName "kube-api-access-szh75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.289253 4787 scope.go:117] "RemoveContainer" containerID="cae0ec0c014dae5fd5a54d328b9898e6d68c67c322796b11b3a774d86a45fd24" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.290647 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szh75\" (UniqueName: \"kubernetes.io/projected/41cca919-781a-48fb-99c1-ec7ebbb7c601-kube-api-access-szh75\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.296227 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cc2d8" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.308119 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-config-data" (OuterVolumeSpecName: "config-data") pod "41cca919-781a-48fb-99c1-ec7ebbb7c601" (UID: "41cca919-781a-48fb-99c1-ec7ebbb7c601"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.349550 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41cca919-781a-48fb-99c1-ec7ebbb7c601" (UID: "41cca919-781a-48fb-99c1-ec7ebbb7c601"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.391996 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f7f8\" (UniqueName: \"kubernetes.io/projected/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-kube-api-access-5f7f8\") pod \"e52ed138-0046-4bf6-b8f0-7bd5fb016f16\" (UID: \"e52ed138-0046-4bf6-b8f0-7bd5fb016f16\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.392177 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts\") pod \"e52ed138-0046-4bf6-b8f0-7bd5fb016f16\" (UID: \"e52ed138-0046-4bf6-b8f0-7bd5fb016f16\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.392641 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.392658 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41cca919-781a-48fb-99c1-ec7ebbb7c601-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.393292 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e52ed138-0046-4bf6-b8f0-7bd5fb016f16" (UID: "e52ed138-0046-4bf6-b8f0-7bd5fb016f16"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.395363 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-kube-api-access-5f7f8" (OuterVolumeSpecName: "kube-api-access-5f7f8") pod "e52ed138-0046-4bf6-b8f0-7bd5fb016f16" (UID: "e52ed138-0046-4bf6-b8f0-7bd5fb016f16"). InnerVolumeSpecName "kube-api-access-5f7f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.404877 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.412628 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.418453 4787 scope.go:117] "RemoveContainer" containerID="011bedc70e267a9340d9ea488ce83a6f2966f96cd9f36e47bd7028368ceb1135" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.435938 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_208dca76-0c20-4fd9-a685-76144777c48c/ovn-northd/0.log" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.436075 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.507870 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-metrics-certs-tls-certs\") pod \"208dca76-0c20-4fd9-a685-76144777c48c\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.507972 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kolla-config\") pod \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508008 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-combined-ca-bundle\") pod \"208dca76-0c20-4fd9-a685-76144777c48c\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508045 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-ovn-northd-tls-certs\") pod \"208dca76-0c20-4fd9-a685-76144777c48c\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508091 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508139 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-default\") pod \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508216 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-combined-ca-bundle\") pod \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508262 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-config-data\") pod \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\" (UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508289 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/208dca76-0c20-4fd9-a685-76144777c48c-ovn-rundir\") pod \"208dca76-0c20-4fd9-a685-76144777c48c\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508320 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngg4t\" (UniqueName: \"kubernetes.io/projected/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-kube-api-access-ngg4t\") pod \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\" (UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508346 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m5mw\" (UniqueName: \"kubernetes.io/projected/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kube-api-access-4m5mw\") pod \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " Jan 26 18:06:46 crc 
kubenswrapper[4787]: I0126 18:06:46.508371 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-operator-scripts\") pod \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508393 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-generated\") pod \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508417 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-config\") pod \"208dca76-0c20-4fd9-a685-76144777c48c\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508449 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxn77\" (UniqueName: \"kubernetes.io/projected/208dca76-0c20-4fd9-a685-76144777c48c-kube-api-access-cxn77\") pod \"208dca76-0c20-4fd9-a685-76144777c48c\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508480 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-scripts\") pod \"208dca76-0c20-4fd9-a685-76144777c48c\" (UID: \"208dca76-0c20-4fd9-a685-76144777c48c\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508524 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-galera-tls-certs\") pod \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\" (UID: \"d3871ebb-6b25-4e36-a3a1-3e9a220768f5\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508548 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-combined-ca-bundle\") pod \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\" (UID: \"afd68dd7-739f-4cd0-b3eb-c786b79c4b40\") " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.508597 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d3871ebb-6b25-4e36-a3a1-3e9a220768f5" (UID: "d3871ebb-6b25-4e36-a3a1-3e9a220768f5"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.512962 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f7f8\" (UniqueName: \"kubernetes.io/projected/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-kube-api-access-5f7f8\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.512996 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e52ed138-0046-4bf6-b8f0-7bd5fb016f16-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.513012 4787 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.521576 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/208dca76-0c20-4fd9-a685-76144777c48c-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "208dca76-0c20-4fd9-a685-76144777c48c" (UID: "208dca76-0c20-4fd9-a685-76144777c48c"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.524231 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5fc4-account-create-update-zr6c9"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.528559 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-scripts" (OuterVolumeSpecName: "scripts") pod "208dca76-0c20-4fd9-a685-76144777c48c" (UID: "208dca76-0c20-4fd9-a685-76144777c48c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.529337 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "d3871ebb-6b25-4e36-a3a1-3e9a220768f5" (UID: "d3871ebb-6b25-4e36-a3a1-3e9a220768f5"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.529832 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3871ebb-6b25-4e36-a3a1-3e9a220768f5" (UID: "d3871ebb-6b25-4e36-a3a1-3e9a220768f5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.530585 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-config" (OuterVolumeSpecName: "config") pod "208dca76-0c20-4fd9-a685-76144777c48c" (UID: "208dca76-0c20-4fd9-a685-76144777c48c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.531624 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "d3871ebb-6b25-4e36-a3a1-3e9a220768f5" (UID: "d3871ebb-6b25-4e36-a3a1-3e9a220768f5"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.542880 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5fc4-account-create-update-zr6c9"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.544691 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kube-api-access-4m5mw" (OuterVolumeSpecName: "kube-api-access-4m5mw") pod "d3871ebb-6b25-4e36-a3a1-3e9a220768f5" (UID: "d3871ebb-6b25-4e36-a3a1-3e9a220768f5"). InnerVolumeSpecName "kube-api-access-4m5mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.557407 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-kube-api-access-ngg4t" (OuterVolumeSpecName: "kube-api-access-ngg4t") pod "afd68dd7-739f-4cd0-b3eb-c786b79c4b40" (UID: "afd68dd7-739f-4cd0-b3eb-c786b79c4b40"). InnerVolumeSpecName "kube-api-access-ngg4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.561309 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.583120 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/208dca76-0c20-4fd9-a685-76144777c48c-kube-api-access-cxn77" (OuterVolumeSpecName: "kube-api-access-cxn77") pod "208dca76-0c20-4fd9-a685-76144777c48c" (UID: "208dca76-0c20-4fd9-a685-76144777c48c"). InnerVolumeSpecName "kube-api-access-cxn77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.606908 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_208dca76-0c20-4fd9-a685-76144777c48c/ovn-northd/0.log" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.606997 4787 generic.go:334] "Generic (PLEG): container finished" podID="208dca76-0c20-4fd9-a685-76144777c48c" containerID="17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1" exitCode=139 Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.607094 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"208dca76-0c20-4fd9-a685-76144777c48c","Type":"ContainerDied","Data":"17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1"} Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.607130 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"208dca76-0c20-4fd9-a685-76144777c48c","Type":"ContainerDied","Data":"dca345c6494966284813e9102649c174b28820808082162b5a1c018cf28a58d5"} Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.607123 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.612674 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.617238 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.617279 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/208dca76-0c20-4fd9-a685-76144777c48c-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.617291 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngg4t\" (UniqueName: \"kubernetes.io/projected/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-kube-api-access-ngg4t\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.617301 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m5mw\" (UniqueName: \"kubernetes.io/projected/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-kube-api-access-4m5mw\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.617315 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.617327 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.617337 4787 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.617346 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxn77\" (UniqueName: \"kubernetes.io/projected/208dca76-0c20-4fd9-a685-76144777c48c-kube-api-access-cxn77\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.617356 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/208dca76-0c20-4fd9-a685-76144777c48c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.618023 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "d3871ebb-6b25-4e36-a3a1-3e9a220768f5" (UID: "d3871ebb-6b25-4e36-a3a1-3e9a220768f5"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.619129 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "208dca76-0c20-4fd9-a685-76144777c48c" (UID: "208dca76-0c20-4fd9-a685-76144777c48c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.623575 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3871ebb-6b25-4e36-a3a1-3e9a220768f5" (UID: "d3871ebb-6b25-4e36-a3a1-3e9a220768f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.624022 4787 scope.go:117] "RemoveContainer" containerID="0bdd2c4e02e5b71c8136f8efddfe53e1a758f24f948ca99efec537c186ed5b0c" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.626248 4787 generic.go:334] "Generic (PLEG): container finished" podID="d85cc53a-0132-4491-82a1-056badced30c" containerID="0b9197c0a132a15fde8443730e1e6afbd7356943bd0400304fb06f44082983a1" exitCode=0 Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.626303 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76b5d4b8cb-wb8cc" event={"ID":"d85cc53a-0132-4491-82a1-056badced30c","Type":"ContainerDied","Data":"0b9197c0a132a15fde8443730e1e6afbd7356943bd0400304fb06f44082983a1"} Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.639139 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69b7546858-x7nbv"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.640548 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-config-data" (OuterVolumeSpecName: "config-data") pod "afd68dd7-739f-4cd0-b3eb-c786b79c4b40" (UID: "afd68dd7-739f-4cd0-b3eb-c786b79c4b40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.666232 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afd68dd7-739f-4cd0-b3eb-c786b79c4b40" (UID: "afd68dd7-739f-4cd0-b3eb-c786b79c4b40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.666289 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.666418 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-69b7546858-x7nbv"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.666468 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d3871ebb-6b25-4e36-a3a1-3e9a220768f5","Type":"ContainerDied","Data":"d040f7addf0a6b6714329b0efa81f4294fccfb9bdfd3978b7449205fa25a0ee3"} Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.678214 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.678249 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cc2d8" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.678273 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.689151 4787 scope.go:117] "RemoveContainer" containerID="cacee73bf45c04d60820e3ca12199d2fcac4ad380185a4c29165ce46e0b6bc52" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.690008 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.706147 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "d3871ebb-6b25-4e36-a3a1-3e9a220768f5" (UID: "d3871ebb-6b25-4e36-a3a1-3e9a220768f5"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.722283 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.725150 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "208dca76-0c20-4fd9-a685-76144777c48c" (UID: "208dca76-0c20-4fd9-a685-76144777c48c"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.729078 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.732192 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.732249 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.732266 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.732278 4787 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3871ebb-6b25-4e36-a3a1-3e9a220768f5-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 
18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.732292 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd68dd7-739f-4cd0-b3eb-c786b79c4b40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.746326 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "208dca76-0c20-4fd9-a685-76144777c48c" (UID: "208dca76-0c20-4fd9-a685-76144777c48c"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.755541 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.758045 4787 scope.go:117] "RemoveContainer" containerID="5b93e665a1925c67d4a5127df1172c0fcd71ee7863b68ee1361c8117a65ecb61" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.791527 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-545b-account-create-update-wbcdd"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.808385 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-545b-account-create-update-wbcdd"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.825818 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.836519 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.836651 4787 
reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/208dca76-0c20-4fd9-a685-76144777c48c-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.836717 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.839439 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.848216 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-684775777b-sd52g"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.853069 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-684775777b-sd52g"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.870545 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.870615 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.871364 4787 scope.go:117] "RemoveContainer" containerID="346eebb01a3f72fc52b37308e2a116af63d29c1e13d92d2e955b17ea8a8bd265" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.904323 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-571d-account-create-update-2kgf7"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.912006 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-571d-account-create-update-2kgf7"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.935637 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b1b9-account-create-update-2zlhz"] Jan 26 18:06:46 crc kubenswrapper[4787]: 
I0126 18:06:46.938790 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts\") pod \"keystone-b1b9-account-create-update-2zlhz\" (UID: \"b7c7197d-6ffc-4db0-8074-ff1782c9ce7a\") " pod="openstack/keystone-b1b9-account-create-update-2zlhz" Jan 26 18:06:46 crc kubenswrapper[4787]: E0126 18:06:46.939028 4787 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 26 18:06:46 crc kubenswrapper[4787]: E0126 18:06:46.939115 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts podName:b7c7197d-6ffc-4db0-8074-ff1782c9ce7a nodeName:}" failed. No retries permitted until 2026-01-26 18:06:50.939070025 +0000 UTC m=+1379.646206158 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts") pod "keystone-b1b9-account-create-update-2zlhz" (UID: "b7c7197d-6ffc-4db0-8074-ff1782c9ce7a") : configmap "openstack-scripts" not found Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.951578 4787 scope.go:117] "RemoveContainer" containerID="ef0bc42b7bd2dab6b5fbabd59a7d254961bab025f7081c2b30ff94990add57bc" Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.954619 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b1b9-account-create-update-2zlhz"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.963828 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6b64b47c9-scfhg"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.981911 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6b64b47c9-scfhg"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.993052 4787 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:06:46 crc kubenswrapper[4787]: I0126 18:06:46.995343 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.008763 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.014670 4787 scope.go:117] "RemoveContainer" containerID="ae0907e003fb71214ee6b4a55004aff424e981221e4c8a3a5bf129205ee9d3ee" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.015557 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.026929 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.029164 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="6333914f-1303-43b8-ac9b-88c29e2bea64" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.195:6080/vnc_lite.html\": context deadline exceeded" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.034249 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-66c5b5fbf4-wjxfs"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.040642 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.040677 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rrqj\" (UniqueName: 
\"kubernetes.io/projected/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a-kube-api-access-7rrqj\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.047767 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-558e-account-create-update-b4hcn"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.054655 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-558e-account-create-update-b4hcn"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.060184 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cc2d8"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.066493 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cc2d8"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.075983 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.082821 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.088813 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.098175 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.115115 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.130075 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.148755 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.149350 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.162388 4787 scope.go:117] "RemoveContainer" containerID="822ecb7ff1646f9b506cca634869b822b27bb00a188755e74a6e4b48e2ab3e95" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.173829 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.176692 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.187475 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.195690 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.220668 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.249168 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-internal-tls-certs\") pod \"d85cc53a-0132-4491-82a1-056badced30c\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.249232 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-credential-keys\") pod \"d85cc53a-0132-4491-82a1-056badced30c\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.249331 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-scripts\") pod \"d85cc53a-0132-4491-82a1-056badced30c\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.249379 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57ddk\" (UniqueName: \"kubernetes.io/projected/d85cc53a-0132-4491-82a1-056badced30c-kube-api-access-57ddk\") pod \"d85cc53a-0132-4491-82a1-056badced30c\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.249444 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-public-tls-certs\") pod \"d85cc53a-0132-4491-82a1-056badced30c\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.249503 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-fernet-keys\") pod \"d85cc53a-0132-4491-82a1-056badced30c\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.249565 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-combined-ca-bundle\") pod \"d85cc53a-0132-4491-82a1-056badced30c\" (UID: \"d85cc53a-0132-4491-82a1-056badced30c\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.249597 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-config-data\") pod \"d85cc53a-0132-4491-82a1-056badced30c\" (UID: 
\"d85cc53a-0132-4491-82a1-056badced30c\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.260444 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d85cc53a-0132-4491-82a1-056badced30c" (UID: "d85cc53a-0132-4491-82a1-056badced30c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.261487 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-scripts" (OuterVolumeSpecName: "scripts") pod "d85cc53a-0132-4491-82a1-056badced30c" (UID: "d85cc53a-0132-4491-82a1-056badced30c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.272383 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85cc53a-0132-4491-82a1-056badced30c-kube-api-access-57ddk" (OuterVolumeSpecName: "kube-api-access-57ddk") pod "d85cc53a-0132-4491-82a1-056badced30c" (UID: "d85cc53a-0132-4491-82a1-056badced30c"). InnerVolumeSpecName "kube-api-access-57ddk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.272496 4787 scope.go:117] "RemoveContainer" containerID="e68164068e1c1a9792fd558fe1417586d54a36d32fce7de08c55887a3e3bda6f" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.289100 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d85cc53a-0132-4491-82a1-056badced30c" (UID: "d85cc53a-0132-4491-82a1-056badced30c"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.305789 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d85cc53a-0132-4491-82a1-056badced30c" (UID: "d85cc53a-0132-4491-82a1-056badced30c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.344743 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-config-data" (OuterVolumeSpecName: "config-data") pod "d85cc53a-0132-4491-82a1-056badced30c" (UID: "d85cc53a-0132-4491-82a1-056badced30c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.354794 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.354839 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.354853 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.354864 4787 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 
18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.354877 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.354889 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57ddk\" (UniqueName: \"kubernetes.io/projected/d85cc53a-0132-4491-82a1-056badced30c-kube-api-access-57ddk\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.357550 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.367127 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d85cc53a-0132-4491-82a1-056badced30c" (UID: "d85cc53a-0132-4491-82a1-056badced30c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.369477 4787 scope.go:117] "RemoveContainer" containerID="e8e42b27e122fd736da88e5dda90848d4ff33a360274bad85a5080141095b729" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.378146 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d85cc53a-0132-4491-82a1-056badced30c" (UID: "d85cc53a-0132-4491-82a1-056badced30c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.402487 4787 scope.go:117] "RemoveContainer" containerID="26320b4b7ea2c7895920e7f17135b8543b24a3e99a032d58fbe702acea73a2ec" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.423840 4787 scope.go:117] "RemoveContainer" containerID="a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.456866 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgqln\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-kube-api-access-mgqln\") pod \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.456920 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data\") pod \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.457007 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57e65f25-43dd-4baf-b2fa-7256dcbd452d-erlang-cookie-secret\") pod \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.457037 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57e65f25-43dd-4baf-b2fa-7256dcbd452d-pod-info\") pod \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.457076 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-erlang-cookie\") pod \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.457131 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-plugins-conf\") pod \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.457151 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-server-conf\") pod \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.457201 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-tls\") pod \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.457262 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-confd\") pod \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.457281 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-plugins\") pod \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " Jan 26 18:06:47 
crc kubenswrapper[4787]: I0126 18:06:47.457308 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\" (UID: \"57e65f25-43dd-4baf-b2fa-7256dcbd452d\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.457636 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.457655 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85cc53a-0132-4491-82a1-056badced30c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.457734 4787 secret.go:188] Couldn't get secret openstack/cinder-scheduler-config-data: secret "cinder-scheduler-config-data" not found Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.457785 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:55.457770092 +0000 UTC m=+1384.164906225 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-scheduler-config-data" not found Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.458321 4787 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.458405 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data podName:bb1abb80-0591-49c7-b549-969066392a5a nodeName:}" failed. No retries permitted until 2026-01-26 18:06:55.458377166 +0000 UTC m=+1384.165513379 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data") pod "rabbitmq-cell1-server-0" (UID: "bb1abb80-0591-49c7-b549-969066392a5a") : configmap "rabbitmq-cell1-config-data" not found Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.458464 4787 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.458495 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:55.458486109 +0000 UTC m=+1384.165622242 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-config-data" not found Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.458528 4787 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.458549 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts podName:38c9405f-74d3-4282-90ac-0a9909a68b43 nodeName:}" failed. No retries permitted until 2026-01-26 18:06:55.45854353 +0000 UTC m=+1384.165679653 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts") pod "cinder-scheduler-0" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43") : secret "cinder-scripts" not found Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.458810 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "57e65f25-43dd-4baf-b2fa-7256dcbd452d" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.459312 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "57e65f25-43dd-4baf-b2fa-7256dcbd452d" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.459307 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "57e65f25-43dd-4baf-b2fa-7256dcbd452d" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.463211 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-kube-api-access-mgqln" (OuterVolumeSpecName: "kube-api-access-mgqln") pod "57e65f25-43dd-4baf-b2fa-7256dcbd452d" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d"). InnerVolumeSpecName "kube-api-access-mgqln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.465565 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "57e65f25-43dd-4baf-b2fa-7256dcbd452d" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.469659 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e65f25-43dd-4baf-b2fa-7256dcbd452d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "57e65f25-43dd-4baf-b2fa-7256dcbd452d" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.470362 4787 scope.go:117] "RemoveContainer" containerID="17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.478672 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/57e65f25-43dd-4baf-b2fa-7256dcbd452d-pod-info" (OuterVolumeSpecName: "pod-info") pod "57e65f25-43dd-4baf-b2fa-7256dcbd452d" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.478782 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "57e65f25-43dd-4baf-b2fa-7256dcbd452d" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.479241 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data" (OuterVolumeSpecName: "config-data") pod "57e65f25-43dd-4baf-b2fa-7256dcbd452d" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.505759 4787 scope.go:117] "RemoveContainer" containerID="a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd" Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.506323 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd\": container with ID starting with a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd not found: ID does not exist" containerID="a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.506374 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd"} err="failed to get container status \"a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd\": rpc error: code = NotFound desc = could not find container \"a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd\": container with ID starting with a60012459a3ce7016e4642021955fad940e5495b0683a7b0718378d18854c1fd not found: ID does not exist" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.506402 4787 scope.go:117] "RemoveContainer" containerID="17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1" Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.506721 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1\": container with ID starting with 17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1 not found: ID does not exist" containerID="17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.506767 
4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1"} err="failed to get container status \"17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1\": rpc error: code = NotFound desc = could not find container \"17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1\": container with ID starting with 17daccd465d941b87b581e064e27ee25935b9a440dcbd43bf4992be9522d0bc1 not found: ID does not exist" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.506796 4787 scope.go:117] "RemoveContainer" containerID="b6fcdda24b66c4d1afaf8d2b19ce256b6523b880c84f7b52437eb28e1408e1fe" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.529620 4787 scope.go:117] "RemoveContainer" containerID="27457a194befcc8699c0109b2497a7ad92fb469cd60a8cf8cacda2c5dfed2719" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.539128 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-server-conf" (OuterVolumeSpecName: "server-conf") pod "57e65f25-43dd-4baf-b2fa-7256dcbd452d" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.554652 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "57e65f25-43dd-4baf-b2fa-7256dcbd452d" (UID: "57e65f25-43dd-4baf-b2fa-7256dcbd452d"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.559460 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.559489 4787 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.559499 4787 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.559509 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.559519 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.559526 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57e65f25-43dd-4baf-b2fa-7256dcbd452d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.559552 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.559561 4787 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgqln\" (UniqueName: \"kubernetes.io/projected/57e65f25-43dd-4baf-b2fa-7256dcbd452d-kube-api-access-mgqln\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.559571 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57e65f25-43dd-4baf-b2fa-7256dcbd452d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.559581 4787 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57e65f25-43dd-4baf-b2fa-7256dcbd452d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.559589 4787 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57e65f25-43dd-4baf-b2fa-7256dcbd452d-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.586862 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.590395 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.590726 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f 
is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.591011 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.591035 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server" Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.593479 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.602142 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.604035 4787 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.604113 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovs-vswitchd" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.614574 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.627931 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" path="/var/lib/kubelet/pods/12e77a01-1165-4f0a-ad35-fe127b5ae6c0/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.629100 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e181982-6b54-4a6f-a52c-eb025b767fb0" path="/var/lib/kubelet/pods/1e181982-6b54-4a6f-a52c-eb025b767fb0/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.629729 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="208dca76-0c20-4fd9-a685-76144777c48c" path="/var/lib/kubelet/pods/208dca76-0c20-4fd9-a685-76144777c48c/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.630805 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ba670d-d2d7-47aa-bc54-6da4d0e532f3" path="/var/lib/kubelet/pods/28ba670d-d2d7-47aa-bc54-6da4d0e532f3/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.631307 4787 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="3f8a7212-44ab-42f1-86be-8b79726fe4f8" path="/var/lib/kubelet/pods/3f8a7212-44ab-42f1-86be-8b79726fe4f8/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.631843 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41cca919-781a-48fb-99c1-ec7ebbb7c601" path="/var/lib/kubelet/pods/41cca919-781a-48fb-99c1-ec7ebbb7c601/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.633322 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" path="/var/lib/kubelet/pods/426dd23d-ce8f-4f72-aece-79585de1cef1/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.634110 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44392b57-bc6b-4a8b-8ff3-346fab2422af" path="/var/lib/kubelet/pods/44392b57-bc6b-4a8b-8ff3-346fab2422af/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.635472 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="751af144-58aa-4770-b232-98816c3498d2" path="/var/lib/kubelet/pods/751af144-58aa-4770-b232-98816c3498d2/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.635899 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c0c3c8-9282-45e5-b376-9c335e24573a" path="/var/lib/kubelet/pods/87c0c3c8-9282-45e5-b376-9c335e24573a/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.636475 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="add7dee2-b5b8-4a04-84fd-76f323e7e444" path="/var/lib/kubelet/pods/add7dee2-b5b8-4a04-84fd-76f323e7e444/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.636816 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd68dd7-739f-4cd0-b3eb-c786b79c4b40" path="/var/lib/kubelet/pods/afd68dd7-739f-4cd0-b3eb-c786b79c4b40/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.637713 4787 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="b7c7197d-6ffc-4db0-8074-ff1782c9ce7a" path="/var/lib/kubelet/pods/b7c7197d-6ffc-4db0-8074-ff1782c9ce7a/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.638162 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27796ea-3db5-42ad-8b22-e4d774e28578" path="/var/lib/kubelet/pods/c27796ea-3db5-42ad-8b22-e4d774e28578/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.638810 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b36455-e624-4719-b164-29f4afacfb4d" path="/var/lib/kubelet/pods/c9b36455-e624-4719-b164-29f4afacfb4d/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.639222 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca591aea-a146-4b51-887e-9688a249fdad" path="/var/lib/kubelet/pods/ca591aea-a146-4b51-887e-9688a249fdad/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.641697 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3871ebb-6b25-4e36-a3a1-3e9a220768f5" path="/var/lib/kubelet/pods/d3871ebb-6b25-4e36-a3a1-3e9a220768f5/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.642616 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df4a77a8-4971-432f-81db-ac6be78f24a0" path="/var/lib/kubelet/pods/df4a77a8-4971-432f-81db-ac6be78f24a0/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.643047 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e52ed138-0046-4bf6-b8f0-7bd5fb016f16" path="/var/lib/kubelet/pods/e52ed138-0046-4bf6-b8f0-7bd5fb016f16/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.643543 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f43c375f-1176-442e-98fd-5d9acba6e199" path="/var/lib/kubelet/pods/f43c375f-1176-442e-98fd-5d9acba6e199/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.649485 4787 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" path="/var/lib/kubelet/pods/fa430a7c-4527-4fdd-aab8-f0f2588ccdaa/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.650153 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" path="/var/lib/kubelet/pods/fd698097-04e8-4ad2-bc6e-fbdf16dfd12a/volumes" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.661102 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.699544 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5rlw8" podUID="b55e012f-76df-4721-8be3-dba72f37cf33" containerName="ovn-controller" probeResult="failure" output="" Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.701937 4787 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 26 18:06:47 crc kubenswrapper[4787]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-26T18:06:40Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 26 18:06:47 crc kubenswrapper[4787]: /etc/init.d/functions: line 589: 386 Alarm clock "$@" Jan 26 18:06:47 crc kubenswrapper[4787]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-5rlw8" message=< Jan 26 18:06:47 crc kubenswrapper[4787]: Exiting ovn-controller (1) [FAILED] Jan 26 18:06:47 crc kubenswrapper[4787]: Killing ovn-controller (1) [ OK ] Jan 26 18:06:47 crc kubenswrapper[4787]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 26 18:06:47 crc kubenswrapper[4787]: 2026-01-26T18:06:40Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 26 18:06:47 crc kubenswrapper[4787]: /etc/init.d/functions: line 589: 386 Alarm clock "$@" Jan 26 18:06:47 
crc kubenswrapper[4787]: > Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.701991 4787 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 26 18:06:47 crc kubenswrapper[4787]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-26T18:06:40Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 26 18:06:47 crc kubenswrapper[4787]: /etc/init.d/functions: line 589: 386 Alarm clock "$@" Jan 26 18:06:47 crc kubenswrapper[4787]: > pod="openstack/ovn-controller-5rlw8" podUID="b55e012f-76df-4721-8be3-dba72f37cf33" containerName="ovn-controller" containerID="cri-o://0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.702034 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-5rlw8" podUID="b55e012f-76df-4721-8be3-dba72f37cf33" containerName="ovn-controller" containerID="cri-o://0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e" gracePeriod=22 Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.702093 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e is running failed: container process not found" containerID="0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.705536 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e is running failed: container process not found" containerID="0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e" 
cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.706599 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e is running failed: container process not found" containerID="0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.706641 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-5rlw8" podUID="b55e012f-76df-4721-8be3-dba72f37cf33" containerName="ovn-controller" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.714113 4787 generic.go:334] "Generic (PLEG): container finished" podID="bb1abb80-0591-49c7-b549-969066392a5a" containerID="6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd" exitCode=0 Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.714214 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb1abb80-0591-49c7-b549-969066392a5a","Type":"ContainerDied","Data":"6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd"} Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.714264 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb1abb80-0591-49c7-b549-969066392a5a","Type":"ContainerDied","Data":"8d9e1d13dba7f54258e89da7a97d5d2f915dcdc1adce2a1e9d02e8b75ea65ea6"} Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.714289 4787 scope.go:117] "RemoveContainer" 
containerID="6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.714482 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.724129 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76b5d4b8cb-wb8cc" event={"ID":"d85cc53a-0132-4491-82a1-056badced30c","Type":"ContainerDied","Data":"a03c80c28bb651ac333fadc054b6010121b162d5d635ef30bcd40434e7f77eb0"} Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.724268 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76b5d4b8cb-wb8cc" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.726624 4787 generic.go:334] "Generic (PLEG): container finished" podID="57e65f25-43dd-4baf-b2fa-7256dcbd452d" containerID="53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad" exitCode=0 Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.726678 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"57e65f25-43dd-4baf-b2fa-7256dcbd452d","Type":"ContainerDied","Data":"53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad"} Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.726695 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"57e65f25-43dd-4baf-b2fa-7256dcbd452d","Type":"ContainerDied","Data":"8ddf2f9cc6fdc0aed0b4ecb985b107a7105c3dc393c8836e8e36c2f918115c9d"} Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.726768 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.763367 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb1abb80-0591-49c7-b549-969066392a5a-erlang-cookie-secret\") pod \"bb1abb80-0591-49c7-b549-969066392a5a\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.763429 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"bb1abb80-0591-49c7-b549-969066392a5a\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.763464 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-tls\") pod \"bb1abb80-0591-49c7-b549-969066392a5a\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.763491 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-plugins\") pod \"bb1abb80-0591-49c7-b549-969066392a5a\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.763528 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-server-conf\") pod \"bb1abb80-0591-49c7-b549-969066392a5a\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.763611 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data\") pod \"bb1abb80-0591-49c7-b549-969066392a5a\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.763639 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-plugins-conf\") pod \"bb1abb80-0591-49c7-b549-969066392a5a\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.764006 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-erlang-cookie\") pod \"bb1abb80-0591-49c7-b549-969066392a5a\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.764051 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb1abb80-0591-49c7-b549-969066392a5a-pod-info\") pod \"bb1abb80-0591-49c7-b549-969066392a5a\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.764096 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-confd\") pod \"bb1abb80-0591-49c7-b549-969066392a5a\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.764118 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csxbl\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-kube-api-access-csxbl\") pod \"bb1abb80-0591-49c7-b549-969066392a5a\" (UID: \"bb1abb80-0591-49c7-b549-969066392a5a\") " Jan 26 18:06:47 crc 
kubenswrapper[4787]: I0126 18:06:47.764153 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bb1abb80-0591-49c7-b549-969066392a5a" (UID: "bb1abb80-0591-49c7-b549-969066392a5a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.764416 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bb1abb80-0591-49c7-b549-969066392a5a" (UID: "bb1abb80-0591-49c7-b549-969066392a5a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.764785 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.764809 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.764860 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bb1abb80-0591-49c7-b549-969066392a5a" (UID: "bb1abb80-0591-49c7-b549-969066392a5a"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.769408 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb1abb80-0591-49c7-b549-969066392a5a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bb1abb80-0591-49c7-b549-969066392a5a" (UID: "bb1abb80-0591-49c7-b549-969066392a5a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.769418 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bb1abb80-0591-49c7-b549-969066392a5a-pod-info" (OuterVolumeSpecName: "pod-info") pod "bb1abb80-0591-49c7-b549-969066392a5a" (UID: "bb1abb80-0591-49c7-b549-969066392a5a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.769527 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "bb1abb80-0591-49c7-b549-969066392a5a" (UID: "bb1abb80-0591-49c7-b549-969066392a5a"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.774327 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bb1abb80-0591-49c7-b549-969066392a5a" (UID: "bb1abb80-0591-49c7-b549-969066392a5a"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.775110 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-kube-api-access-csxbl" (OuterVolumeSpecName: "kube-api-access-csxbl") pod "bb1abb80-0591-49c7-b549-969066392a5a" (UID: "bb1abb80-0591-49c7-b549-969066392a5a"). InnerVolumeSpecName "kube-api-access-csxbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.795434 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data" (OuterVolumeSpecName: "config-data") pod "bb1abb80-0591-49c7-b549-969066392a5a" (UID: "bb1abb80-0591-49c7-b549-969066392a5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.799200 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-76b5d4b8cb-wb8cc"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.804660 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-76b5d4b8cb-wb8cc"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.806600 4787 scope.go:117] "RemoveContainer" containerID="48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.840076 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.845857 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.858488 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-server-conf" (OuterVolumeSpecName: 
"server-conf") pod "bb1abb80-0591-49c7-b549-969066392a5a" (UID: "bb1abb80-0591-49c7-b549-969066392a5a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.870636 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.870672 4787 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.870683 4787 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb1abb80-0591-49c7-b549-969066392a5a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.870695 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csxbl\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-kube-api-access-csxbl\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.870705 4787 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb1abb80-0591-49c7-b549-969066392a5a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.870727 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.870736 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.870745 4787 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb1abb80-0591-49c7-b549-969066392a5a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.880898 4787 scope.go:117] "RemoveContainer" containerID="6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd" Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.887928 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd\": container with ID starting with 6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd not found: ID does not exist" containerID="6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.887990 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd"} err="failed to get container status \"6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd\": rpc error: code = NotFound desc = could not find container \"6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd\": container with ID starting with 6cdb0b4a79d30f65382fe7796ad69da1b399e7819b04729b9d23c83ac2019efd not found: ID does not exist" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.888019 4787 scope.go:117] "RemoveContainer" containerID="48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d" Jan 26 18:06:47 crc kubenswrapper[4787]: E0126 18:06:47.888725 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d\": container with ID starting with 48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d not found: ID does not exist" containerID="48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.888909 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d"} err="failed to get container status \"48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d\": rpc error: code = NotFound desc = could not find container \"48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d\": container with ID starting with 48fcdf87cd87f3221984c9dfa9b9099e219a1e8163bb6e7eae4d5dfeb334b11d not found: ID does not exist" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.889035 4787 scope.go:117] "RemoveContainer" containerID="0b9197c0a132a15fde8443730e1e6afbd7356943bd0400304fb06f44082983a1" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.893704 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.899295 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bb1abb80-0591-49c7-b549-969066392a5a" (UID: "bb1abb80-0591-49c7-b549-969066392a5a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.916198 4787 scope.go:117] "RemoveContainer" containerID="53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.972291 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb1abb80-0591-49c7-b549-969066392a5a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.972339 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:47 crc kubenswrapper[4787]: I0126 18:06:47.980297 4787 scope.go:117] "RemoveContainer" containerID="2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.041929 4787 scope.go:117] "RemoveContainer" containerID="53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad" Jan 26 18:06:48 crc kubenswrapper[4787]: E0126 18:06:48.042537 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad\": container with ID starting with 53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad not found: ID does not exist" containerID="53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.042582 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad"} err="failed to get container status \"53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad\": rpc error: code = NotFound desc = could not find container 
\"53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad\": container with ID starting with 53a9b72d51b337b28af1252e0978f36efd026f2f95037f7e9eb9bb499802f5ad not found: ID does not exist" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.042731 4787 scope.go:117] "RemoveContainer" containerID="2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c" Jan 26 18:06:48 crc kubenswrapper[4787]: E0126 18:06:48.043334 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c\": container with ID starting with 2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c not found: ID does not exist" containerID="2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.043365 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c"} err="failed to get container status \"2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c\": rpc error: code = NotFound desc = could not find container \"2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c\": container with ID starting with 2b379393d0b31ecf8aa13c5a88a98afc7fc3203b9cf084db467cce0014184e9c not found: ID does not exist" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.149412 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5rlw8_b55e012f-76df-4721-8be3-dba72f37cf33/ovn-controller/0.log" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.149492 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5rlw8" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.162401 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.167836 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.276735 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-combined-ca-bundle\") pod \"b55e012f-76df-4721-8be3-dba72f37cf33\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.276793 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9sdj\" (UniqueName: \"kubernetes.io/projected/b55e012f-76df-4721-8be3-dba72f37cf33-kube-api-access-b9sdj\") pod \"b55e012f-76df-4721-8be3-dba72f37cf33\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.276879 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run-ovn\") pod \"b55e012f-76df-4721-8be3-dba72f37cf33\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.277004 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b55e012f-76df-4721-8be3-dba72f37cf33" (UID: "b55e012f-76df-4721-8be3-dba72f37cf33"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.277071 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-log-ovn\") pod \"b55e012f-76df-4721-8be3-dba72f37cf33\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.277102 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-ovn-controller-tls-certs\") pod \"b55e012f-76df-4721-8be3-dba72f37cf33\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.277234 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b55e012f-76df-4721-8be3-dba72f37cf33-scripts\") pod \"b55e012f-76df-4721-8be3-dba72f37cf33\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.277146 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b55e012f-76df-4721-8be3-dba72f37cf33" (UID: "b55e012f-76df-4721-8be3-dba72f37cf33"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.278284 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b55e012f-76df-4721-8be3-dba72f37cf33-scripts" (OuterVolumeSpecName: "scripts") pod "b55e012f-76df-4721-8be3-dba72f37cf33" (UID: "b55e012f-76df-4721-8be3-dba72f37cf33"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.278347 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run\") pod \"b55e012f-76df-4721-8be3-dba72f37cf33\" (UID: \"b55e012f-76df-4721-8be3-dba72f37cf33\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.278423 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run" (OuterVolumeSpecName: "var-run") pod "b55e012f-76df-4721-8be3-dba72f37cf33" (UID: "b55e012f-76df-4721-8be3-dba72f37cf33"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.278872 4787 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.278892 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b55e012f-76df-4721-8be3-dba72f37cf33-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.278903 4787 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.278917 4787 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b55e012f-76df-4721-8be3-dba72f37cf33-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.307581 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b55e012f-76df-4721-8be3-dba72f37cf33-kube-api-access-b9sdj" (OuterVolumeSpecName: "kube-api-access-b9sdj") pod "b55e012f-76df-4721-8be3-dba72f37cf33" (UID: "b55e012f-76df-4721-8be3-dba72f37cf33"). InnerVolumeSpecName "kube-api-access-b9sdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.309901 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b55e012f-76df-4721-8be3-dba72f37cf33" (UID: "b55e012f-76df-4721-8be3-dba72f37cf33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.340622 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.366291 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "b55e012f-76df-4721-8be3-dba72f37cf33" (UID: "b55e012f-76df-4721-8be3-dba72f37cf33"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.380361 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.380403 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9sdj\" (UniqueName: \"kubernetes.io/projected/b55e012f-76df-4721-8be3-dba72f37cf33-kube-api-access-b9sdj\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.380416 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b55e012f-76df-4721-8be3-dba72f37cf33-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.481207 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-scripts\") pod \"b97fef39-62f3-4457-924d-4b25c40fe88d\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.481267 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-ceilometer-tls-certs\") pod \"b97fef39-62f3-4457-924d-4b25c40fe88d\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.481292 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-combined-ca-bundle\") pod \"b97fef39-62f3-4457-924d-4b25c40fe88d\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 
18:06:48.481778 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhnp5\" (UniqueName: \"kubernetes.io/projected/b97fef39-62f3-4457-924d-4b25c40fe88d-kube-api-access-qhnp5\") pod \"b97fef39-62f3-4457-924d-4b25c40fe88d\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.481853 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-log-httpd\") pod \"b97fef39-62f3-4457-924d-4b25c40fe88d\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.481877 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-config-data\") pod \"b97fef39-62f3-4457-924d-4b25c40fe88d\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.481909 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-run-httpd\") pod \"b97fef39-62f3-4457-924d-4b25c40fe88d\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.482002 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-sg-core-conf-yaml\") pod \"b97fef39-62f3-4457-924d-4b25c40fe88d\" (UID: \"b97fef39-62f3-4457-924d-4b25c40fe88d\") " Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.484074 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"b97fef39-62f3-4457-924d-4b25c40fe88d" (UID: "b97fef39-62f3-4457-924d-4b25c40fe88d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.485307 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b97fef39-62f3-4457-924d-4b25c40fe88d" (UID: "b97fef39-62f3-4457-924d-4b25c40fe88d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.490170 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-scripts" (OuterVolumeSpecName: "scripts") pod "b97fef39-62f3-4457-924d-4b25c40fe88d" (UID: "b97fef39-62f3-4457-924d-4b25c40fe88d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.493355 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b97fef39-62f3-4457-924d-4b25c40fe88d-kube-api-access-qhnp5" (OuterVolumeSpecName: "kube-api-access-qhnp5") pod "b97fef39-62f3-4457-924d-4b25c40fe88d" (UID: "b97fef39-62f3-4457-924d-4b25c40fe88d"). InnerVolumeSpecName "kube-api-access-qhnp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.505456 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b97fef39-62f3-4457-924d-4b25c40fe88d" (UID: "b97fef39-62f3-4457-924d-4b25c40fe88d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.526432 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b97fef39-62f3-4457-924d-4b25c40fe88d" (UID: "b97fef39-62f3-4457-924d-4b25c40fe88d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.559901 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b97fef39-62f3-4457-924d-4b25c40fe88d" (UID: "b97fef39-62f3-4457-924d-4b25c40fe88d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.566003 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-config-data" (OuterVolumeSpecName: "config-data") pod "b97fef39-62f3-4457-924d-4b25c40fe88d" (UID: "b97fef39-62f3-4457-924d-4b25c40fe88d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.583862 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.583897 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.583910 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.583924 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.583936 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhnp5\" (UniqueName: \"kubernetes.io/projected/b97fef39-62f3-4457-924d-4b25c40fe88d-kube-api-access-qhnp5\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.583962 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.583973 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97fef39-62f3-4457-924d-4b25c40fe88d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.583983 4787 
reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97fef39-62f3-4457-924d-4b25c40fe88d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.744022 4787 generic.go:334] "Generic (PLEG): container finished" podID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerID="33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27" exitCode=0 Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.744183 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b97fef39-62f3-4457-924d-4b25c40fe88d","Type":"ContainerDied","Data":"33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27"} Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.744229 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b97fef39-62f3-4457-924d-4b25c40fe88d","Type":"ContainerDied","Data":"2dce9a4909e55352399e54211cb86252ec74af8707e45efcb033b1175108d692"} Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.744269 4787 scope.go:117] "RemoveContainer" containerID="2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.745172 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.755807 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5rlw8_b55e012f-76df-4721-8be3-dba72f37cf33/ovn-controller/0.log" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.755858 4787 generic.go:334] "Generic (PLEG): container finished" podID="b55e012f-76df-4721-8be3-dba72f37cf33" containerID="0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e" exitCode=137 Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.755887 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5rlw8" event={"ID":"b55e012f-76df-4721-8be3-dba72f37cf33","Type":"ContainerDied","Data":"0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e"} Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.755909 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5rlw8" event={"ID":"b55e012f-76df-4721-8be3-dba72f37cf33","Type":"ContainerDied","Data":"6a394825216913acacc5a2c40fedf76af28b984380403144b044d6e21a2d2aac"} Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.755971 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5rlw8" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.770844 4787 scope.go:117] "RemoveContainer" containerID="75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.801497 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.808677 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.825174 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5rlw8"] Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.827190 4787 scope.go:117] "RemoveContainer" containerID="33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.833049 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5rlw8"] Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.850102 4787 scope.go:117] "RemoveContainer" containerID="85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.872615 4787 scope.go:117] "RemoveContainer" containerID="2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb" Jan 26 18:06:48 crc kubenswrapper[4787]: E0126 18:06:48.873149 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb\": container with ID starting with 2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb not found: ID does not exist" containerID="2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.873184 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb"} err="failed to get container status \"2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb\": rpc error: code = NotFound desc = could not find container \"2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb\": container with ID starting with 2b7a7f02cd570bdc219a450c4ed0d1575641366f3f65e9814aff6f5ff08d5dbb not found: ID does not exist" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.873206 4787 scope.go:117] "RemoveContainer" containerID="75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c" Jan 26 18:06:48 crc kubenswrapper[4787]: E0126 18:06:48.873497 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c\": container with ID starting with 75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c not found: ID does not exist" containerID="75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.873516 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c"} err="failed to get container status \"75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c\": rpc error: code = NotFound desc = could not find container \"75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c\": container with ID starting with 75f378f0e093dee5208f2ff0500713240be3b339bb5c9d197744739721872a5c not found: ID does not exist" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.873527 4787 scope.go:117] "RemoveContainer" containerID="33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27" Jan 26 18:06:48 crc kubenswrapper[4787]: E0126 18:06:48.873927 4787 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27\": container with ID starting with 33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27 not found: ID does not exist" containerID="33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.874022 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27"} err="failed to get container status \"33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27\": rpc error: code = NotFound desc = could not find container \"33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27\": container with ID starting with 33bb09a563f63fa44f0b47e66482499c550a16edff1bcad130e7351bb184fb27 not found: ID does not exist" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.874064 4787 scope.go:117] "RemoveContainer" containerID="85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898" Jan 26 18:06:48 crc kubenswrapper[4787]: E0126 18:06:48.874443 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898\": container with ID starting with 85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898 not found: ID does not exist" containerID="85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.874468 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898"} err="failed to get container status \"85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898\": rpc error: code = NotFound desc = could not find container 
\"85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898\": container with ID starting with 85ec9d0c67cdf7314e54beeb85ddb55a427ccb485ea224eec2a39e8b9dd78898 not found: ID does not exist" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.874483 4787 scope.go:117] "RemoveContainer" containerID="0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.897778 4787 scope.go:117] "RemoveContainer" containerID="0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e" Jan 26 18:06:48 crc kubenswrapper[4787]: E0126 18:06:48.898447 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e\": container with ID starting with 0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e not found: ID does not exist" containerID="0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e" Jan 26 18:06:48 crc kubenswrapper[4787]: I0126 18:06:48.898486 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e"} err="failed to get container status \"0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e\": rpc error: code = NotFound desc = could not find container \"0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e\": container with ID starting with 0eedec167d33dfc3ffad370aea7e44dfa8ab3a47c02c8781a6edb3e0e011172e not found: ID does not exist" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.232041 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.396934 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkwkp\" (UniqueName: \"kubernetes.io/projected/38c9405f-74d3-4282-90ac-0a9909a68b43-kube-api-access-rkwkp\") pod \"38c9405f-74d3-4282-90ac-0a9909a68b43\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.397021 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data\") pod \"38c9405f-74d3-4282-90ac-0a9909a68b43\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.397163 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-combined-ca-bundle\") pod \"38c9405f-74d3-4282-90ac-0a9909a68b43\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.397199 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38c9405f-74d3-4282-90ac-0a9909a68b43-etc-machine-id\") pod \"38c9405f-74d3-4282-90ac-0a9909a68b43\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.397235 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts\") pod \"38c9405f-74d3-4282-90ac-0a9909a68b43\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.397287 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom\") pod \"38c9405f-74d3-4282-90ac-0a9909a68b43\" (UID: \"38c9405f-74d3-4282-90ac-0a9909a68b43\") " Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.397356 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38c9405f-74d3-4282-90ac-0a9909a68b43-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "38c9405f-74d3-4282-90ac-0a9909a68b43" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.397541 4787 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38c9405f-74d3-4282-90ac-0a9909a68b43-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.401092 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c9405f-74d3-4282-90ac-0a9909a68b43-kube-api-access-rkwkp" (OuterVolumeSpecName: "kube-api-access-rkwkp") pod "38c9405f-74d3-4282-90ac-0a9909a68b43" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43"). InnerVolumeSpecName "kube-api-access-rkwkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.416297 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "38c9405f-74d3-4282-90ac-0a9909a68b43" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.416705 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts" (OuterVolumeSpecName: "scripts") pod "38c9405f-74d3-4282-90ac-0a9909a68b43" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.435087 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38c9405f-74d3-4282-90ac-0a9909a68b43" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.467572 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data" (OuterVolumeSpecName: "config-data") pod "38c9405f-74d3-4282-90ac-0a9909a68b43" (UID: "38c9405f-74d3-4282-90ac-0a9909a68b43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.498932 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.499012 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkwkp\" (UniqueName: \"kubernetes.io/projected/38c9405f-74d3-4282-90ac-0a9909a68b43-kube-api-access-rkwkp\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.499024 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.499037 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.499047 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c9405f-74d3-4282-90ac-0a9909a68b43-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.601562 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e65f25-43dd-4baf-b2fa-7256dcbd452d" path="/var/lib/kubelet/pods/57e65f25-43dd-4baf-b2fa-7256dcbd452d/volumes" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.603295 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b55e012f-76df-4721-8be3-dba72f37cf33" path="/var/lib/kubelet/pods/b55e012f-76df-4721-8be3-dba72f37cf33/volumes" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.604375 4787 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" path="/var/lib/kubelet/pods/b97fef39-62f3-4457-924d-4b25c40fe88d/volumes" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.606660 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb1abb80-0591-49c7-b549-969066392a5a" path="/var/lib/kubelet/pods/bb1abb80-0591-49c7-b549-969066392a5a/volumes" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.607675 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d85cc53a-0132-4491-82a1-056badced30c" path="/var/lib/kubelet/pods/d85cc53a-0132-4491-82a1-056badced30c/volumes" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.805427 4787 generic.go:334] "Generic (PLEG): container finished" podID="38c9405f-74d3-4282-90ac-0a9909a68b43" containerID="7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a" exitCode=0 Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.805479 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c9405f-74d3-4282-90ac-0a9909a68b43","Type":"ContainerDied","Data":"7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a"} Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.805516 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.805533 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c9405f-74d3-4282-90ac-0a9909a68b43","Type":"ContainerDied","Data":"952d881ce88b2a48c39e97927a6e8de59c892815eecfc3a2709c9a25eafc8600"} Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.805555 4787 scope.go:117] "RemoveContainer" containerID="a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.827446 4787 scope.go:117] "RemoveContainer" containerID="7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.840561 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.848883 4787 scope.go:117] "RemoveContainer" containerID="a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d" Jan 26 18:06:49 crc kubenswrapper[4787]: E0126 18:06:49.849397 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d\": container with ID starting with a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d not found: ID does not exist" containerID="a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.849433 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d"} err="failed to get container status \"a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d\": rpc error: code = NotFound desc = could not find container \"a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d\": container with 
ID starting with a2d43c8ecf3c16676b61389f13cc57bc0731e078c3824cda13e0d43c59518c0d not found: ID does not exist" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.849454 4787 scope.go:117] "RemoveContainer" containerID="7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a" Jan 26 18:06:49 crc kubenswrapper[4787]: E0126 18:06:49.849702 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a\": container with ID starting with 7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a not found: ID does not exist" containerID="7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.849731 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a"} err="failed to get container status \"7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a\": rpc error: code = NotFound desc = could not find container \"7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a\": container with ID starting with 7d569047c9a91c4a14db6f8cf0d6264dcb79a10b5edc6ebd18f24dfc6876134a not found: ID does not exist" Jan 26 18:06:49 crc kubenswrapper[4787]: I0126 18:06:49.850900 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.635676 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c9405f-74d3-4282-90ac-0a9909a68b43" path="/var/lib/kubelet/pods/38c9405f-74d3-4282-90ac-0a9909a68b43/volumes" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.764616 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86f4885877-fz869" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.831199 4787 generic.go:334] "Generic (PLEG): container finished" podID="08858ab3-fd32-43dd-8002-bb2b01216237" containerID="3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3" exitCode=0 Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.831248 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86f4885877-fz869" event={"ID":"08858ab3-fd32-43dd-8002-bb2b01216237","Type":"ContainerDied","Data":"3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3"} Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.831278 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86f4885877-fz869" event={"ID":"08858ab3-fd32-43dd-8002-bb2b01216237","Type":"ContainerDied","Data":"53c6644643e6ea263073aaceabc26647bf6368f984feae1bf3e180a85706ae15"} Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.831298 4787 scope.go:117] "RemoveContainer" containerID="8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.831297 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86f4885877-fz869" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.858768 4787 scope.go:117] "RemoveContainer" containerID="3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.881076 4787 scope.go:117] "RemoveContainer" containerID="8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd" Jan 26 18:06:51 crc kubenswrapper[4787]: E0126 18:06:51.881498 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd\": container with ID starting with 8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd not found: ID does not exist" containerID="8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.881561 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd"} err="failed to get container status \"8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd\": rpc error: code = NotFound desc = could not find container \"8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd\": container with ID starting with 8cf6e5f44a9e7cbd96f5a30ec5a1bd7a15c3805eda1ff67d635a342861d1eecd not found: ID does not exist" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.881590 4787 scope.go:117] "RemoveContainer" containerID="3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3" Jan 26 18:06:51 crc kubenswrapper[4787]: E0126 18:06:51.881873 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3\": container with ID starting with 
3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3 not found: ID does not exist" containerID="3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.881904 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3"} err="failed to get container status \"3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3\": rpc error: code = NotFound desc = could not find container \"3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3\": container with ID starting with 3a63c3bea8153841d2a26cd90fc646a4dc8a23f3b732467503464082bf534ba3 not found: ID does not exist" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.938887 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-config\") pod \"08858ab3-fd32-43dd-8002-bb2b01216237\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.938931 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-internal-tls-certs\") pod \"08858ab3-fd32-43dd-8002-bb2b01216237\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.939013 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hgsm\" (UniqueName: \"kubernetes.io/projected/08858ab3-fd32-43dd-8002-bb2b01216237-kube-api-access-2hgsm\") pod \"08858ab3-fd32-43dd-8002-bb2b01216237\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.939102 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-ovndb-tls-certs\") pod \"08858ab3-fd32-43dd-8002-bb2b01216237\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.939132 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-public-tls-certs\") pod \"08858ab3-fd32-43dd-8002-bb2b01216237\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.939168 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-httpd-config\") pod \"08858ab3-fd32-43dd-8002-bb2b01216237\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.939206 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-combined-ca-bundle\") pod \"08858ab3-fd32-43dd-8002-bb2b01216237\" (UID: \"08858ab3-fd32-43dd-8002-bb2b01216237\") " Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.945084 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08858ab3-fd32-43dd-8002-bb2b01216237-kube-api-access-2hgsm" (OuterVolumeSpecName: "kube-api-access-2hgsm") pod "08858ab3-fd32-43dd-8002-bb2b01216237" (UID: "08858ab3-fd32-43dd-8002-bb2b01216237"). InnerVolumeSpecName "kube-api-access-2hgsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.945776 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "08858ab3-fd32-43dd-8002-bb2b01216237" (UID: "08858ab3-fd32-43dd-8002-bb2b01216237"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.979215 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "08858ab3-fd32-43dd-8002-bb2b01216237" (UID: "08858ab3-fd32-43dd-8002-bb2b01216237"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.984176 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-config" (OuterVolumeSpecName: "config") pod "08858ab3-fd32-43dd-8002-bb2b01216237" (UID: "08858ab3-fd32-43dd-8002-bb2b01216237"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.993754 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08858ab3-fd32-43dd-8002-bb2b01216237" (UID: "08858ab3-fd32-43dd-8002-bb2b01216237"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.995371 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "08858ab3-fd32-43dd-8002-bb2b01216237" (UID: "08858ab3-fd32-43dd-8002-bb2b01216237"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:51 crc kubenswrapper[4787]: I0126 18:06:51.999237 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "08858ab3-fd32-43dd-8002-bb2b01216237" (UID: "08858ab3-fd32-43dd-8002-bb2b01216237"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:06:52 crc kubenswrapper[4787]: I0126 18:06:52.040544 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:52 crc kubenswrapper[4787]: I0126 18:06:52.040590 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:52 crc kubenswrapper[4787]: I0126 18:06:52.040605 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hgsm\" (UniqueName: \"kubernetes.io/projected/08858ab3-fd32-43dd-8002-bb2b01216237-kube-api-access-2hgsm\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:52 crc kubenswrapper[4787]: I0126 18:06:52.040616 4787 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-ovndb-tls-certs\") on node \"crc\" DevicePath 
\"\"" Jan 26 18:06:52 crc kubenswrapper[4787]: I0126 18:06:52.040625 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:52 crc kubenswrapper[4787]: I0126 18:06:52.041133 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:52 crc kubenswrapper[4787]: I0126 18:06:52.041276 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08858ab3-fd32-43dd-8002-bb2b01216237-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:06:52 crc kubenswrapper[4787]: I0126 18:06:52.176768 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86f4885877-fz869"] Jan 26 18:06:52 crc kubenswrapper[4787]: I0126 18:06:52.186031 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86f4885877-fz869"] Jan 26 18:06:52 crc kubenswrapper[4787]: E0126 18:06:52.589707 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:52 crc kubenswrapper[4787]: E0126 18:06:52.590608 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" 
containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:52 crc kubenswrapper[4787]: E0126 18:06:52.591235 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:52 crc kubenswrapper[4787]: E0126 18:06:52.591290 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server" Jan 26 18:06:52 crc kubenswrapper[4787]: E0126 18:06:52.591296 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:06:52 crc kubenswrapper[4787]: E0126 18:06:52.593117 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:06:52 crc kubenswrapper[4787]: E0126 18:06:52.594908 4787 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:06:52 crc kubenswrapper[4787]: E0126 18:06:52.595009 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovs-vswitchd" Jan 26 18:06:53 crc kubenswrapper[4787]: I0126 18:06:53.598140 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08858ab3-fd32-43dd-8002-bb2b01216237" path="/var/lib/kubelet/pods/08858ab3-fd32-43dd-8002-bb2b01216237/volumes" Jan 26 18:06:57 crc kubenswrapper[4787]: E0126 18:06:57.590338 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:57 crc kubenswrapper[4787]: E0126 18:06:57.591579 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:06:57 crc kubenswrapper[4787]: E0126 18:06:57.593021 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID 
of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:57 crc kubenswrapper[4787]: E0126 18:06:57.593309 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:06:57 crc kubenswrapper[4787]: E0126 18:06:57.593346 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server" Jan 26 18:06:57 crc kubenswrapper[4787]: E0126 18:06:57.597380 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:06:57 crc kubenswrapper[4787]: E0126 18:06:57.598839 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 
18:06:57 crc kubenswrapper[4787]: E0126 18:06:57.598903 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovs-vswitchd" Jan 26 18:07:02 crc kubenswrapper[4787]: E0126 18:07:02.589481 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:07:02 crc kubenswrapper[4787]: E0126 18:07:02.590201 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:07:02 crc kubenswrapper[4787]: E0126 18:07:02.591121 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:07:02 crc kubenswrapper[4787]: E0126 18:07:02.591174 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server" Jan 26 18:07:02 crc kubenswrapper[4787]: E0126 18:07:02.591421 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:07:02 crc kubenswrapper[4787]: E0126 18:07:02.592907 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:07:02 crc kubenswrapper[4787]: E0126 18:07:02.595693 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:07:02 crc kubenswrapper[4787]: E0126 18:07:02.595751 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovs-vswitchd" Jan 26 18:07:07 crc kubenswrapper[4787]: E0126 18:07:07.590184 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:07:07 crc kubenswrapper[4787]: E0126 18:07:07.590974 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:07:07 crc kubenswrapper[4787]: E0126 18:07:07.591376 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 26 18:07:07 crc kubenswrapper[4787]: E0126 18:07:07.591455 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server" Jan 26 18:07:07 crc kubenswrapper[4787]: E0126 18:07:07.591736 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:07:07 crc kubenswrapper[4787]: E0126 18:07:07.593362 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:07:07 crc kubenswrapper[4787]: E0126 18:07:07.596166 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 26 18:07:07 crc kubenswrapper[4787]: E0126 18:07:07.596258 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hpbh5" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovs-vswitchd" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.521442 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hpbh5_254806cc-4007-4a34-9852-0716b123830f/ovs-vswitchd/0.log" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.522856 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.533216 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.709583 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-run\") pod \"254806cc-4007-4a34-9852-0716b123830f\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.709640 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift\") pod \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.709675 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-lock\") pod \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.709725 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mphmh\" (UniqueName: \"kubernetes.io/projected/254806cc-4007-4a34-9852-0716b123830f-kube-api-access-mphmh\") pod \"254806cc-4007-4a34-9852-0716b123830f\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.709723 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-run" (OuterVolumeSpecName: "var-run") pod "254806cc-4007-4a34-9852-0716b123830f" (UID: "254806cc-4007-4a34-9852-0716b123830f"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.709749 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-lib\") pod \"254806cc-4007-4a34-9852-0716b123830f\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.709796 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.709854 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84lk9\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-kube-api-access-84lk9\") pod \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.709882 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba57dd7-3de5-4e62-817d-4fc2c295ddee-combined-ca-bundle\") pod \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.709894 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-lib" (OuterVolumeSpecName: "var-lib") pod "254806cc-4007-4a34-9852-0716b123830f" (UID: "254806cc-4007-4a34-9852-0716b123830f"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.709924 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/254806cc-4007-4a34-9852-0716b123830f-scripts\") pod \"254806cc-4007-4a34-9852-0716b123830f\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.710021 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-log\") pod \"254806cc-4007-4a34-9852-0716b123830f\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.710081 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-cache\") pod \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\" (UID: \"fba57dd7-3de5-4e62-817d-4fc2c295ddee\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.710107 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-etc-ovs\") pod \"254806cc-4007-4a34-9852-0716b123830f\" (UID: \"254806cc-4007-4a34-9852-0716b123830f\") " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.710482 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-lock" (OuterVolumeSpecName: "lock") pod "fba57dd7-3de5-4e62-817d-4fc2c295ddee" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.710634 4787 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.710648 4787 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-lock\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.710660 4787 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-lib\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.710634 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "254806cc-4007-4a34-9852-0716b123830f" (UID: "254806cc-4007-4a34-9852-0716b123830f"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.710688 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-log" (OuterVolumeSpecName: "var-log") pod "254806cc-4007-4a34-9852-0716b123830f" (UID: "254806cc-4007-4a34-9852-0716b123830f"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.711065 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-cache" (OuterVolumeSpecName: "cache") pod "fba57dd7-3de5-4e62-817d-4fc2c295ddee" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee"). 
InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.711396 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254806cc-4007-4a34-9852-0716b123830f-scripts" (OuterVolumeSpecName: "scripts") pod "254806cc-4007-4a34-9852-0716b123830f" (UID: "254806cc-4007-4a34-9852-0716b123830f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.715611 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-kube-api-access-84lk9" (OuterVolumeSpecName: "kube-api-access-84lk9") pod "fba57dd7-3de5-4e62-817d-4fc2c295ddee" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee"). InnerVolumeSpecName "kube-api-access-84lk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.715913 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fba57dd7-3de5-4e62-817d-4fc2c295ddee" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.716208 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "swift") pod "fba57dd7-3de5-4e62-817d-4fc2c295ddee" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.717414 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254806cc-4007-4a34-9852-0716b123830f-kube-api-access-mphmh" (OuterVolumeSpecName: "kube-api-access-mphmh") pod "254806cc-4007-4a34-9852-0716b123830f" (UID: "254806cc-4007-4a34-9852-0716b123830f"). InnerVolumeSpecName "kube-api-access-mphmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.812421 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mphmh\" (UniqueName: \"kubernetes.io/projected/254806cc-4007-4a34-9852-0716b123830f-kube-api-access-mphmh\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.812505 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.812526 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84lk9\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-kube-api-access-84lk9\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.812541 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/254806cc-4007-4a34-9852-0716b123830f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.812554 4787 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.812565 4787 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/fba57dd7-3de5-4e62-817d-4fc2c295ddee-cache\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.812576 4787 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/254806cc-4007-4a34-9852-0716b123830f-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.812589 4787 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fba57dd7-3de5-4e62-817d-4fc2c295ddee-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.828306 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.913723 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:10 crc kubenswrapper[4787]: I0126 18:07:10.984829 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba57dd7-3de5-4e62-817d-4fc2c295ddee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fba57dd7-3de5-4e62-817d-4fc2c295ddee" (UID: "fba57dd7-3de5-4e62-817d-4fc2c295ddee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.009884 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hpbh5_254806cc-4007-4a34-9852-0716b123830f/ovs-vswitchd/0.log" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.010828 4787 generic.go:334] "Generic (PLEG): container finished" podID="254806cc-4007-4a34-9852-0716b123830f" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" exitCode=137 Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.010909 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hpbh5" event={"ID":"254806cc-4007-4a34-9852-0716b123830f","Type":"ContainerDied","Data":"1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039"} Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.010940 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hpbh5" event={"ID":"254806cc-4007-4a34-9852-0716b123830f","Type":"ContainerDied","Data":"0b23964af3115e559b81d945ed7ff24477f6fbce806bcee883546e58487ca21b"} Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.010983 4787 scope.go:117] "RemoveContainer" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.011118 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-hpbh5" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.014840 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba57dd7-3de5-4e62-817d-4fc2c295ddee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.021430 4787 generic.go:334] "Generic (PLEG): container finished" podID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerID="d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf" exitCode=137 Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.021485 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf"} Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.021516 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fba57dd7-3de5-4e62-817d-4fc2c295ddee","Type":"ContainerDied","Data":"82ed9b31c8b2eedc4f6be3465b840d841b7815ed34c3ec896042eec0a91956b9"} Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.021588 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.066905 4787 scope.go:117] "RemoveContainer" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.080374 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-hpbh5"] Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.092114 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-hpbh5"] Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.098616 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.103004 4787 scope.go:117] "RemoveContainer" containerID="111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.105604 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.124834 4787 scope.go:117] "RemoveContainer" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.125624 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039\": container with ID starting with 1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039 not found: ID does not exist" containerID="1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.125662 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039"} err="failed to get container status 
\"1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039\": rpc error: code = NotFound desc = could not find container \"1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039\": container with ID starting with 1453eecd5629104d3124dfca7443e316fa03d2a760e16f6a1dc0fc0b2b68d039 not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.125686 4787 scope.go:117] "RemoveContainer" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.126140 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f\": container with ID starting with 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f not found: ID does not exist" containerID="71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.126182 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f"} err="failed to get container status \"71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f\": rpc error: code = NotFound desc = could not find container \"71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f\": container with ID starting with 71299fadbbdbd0c7e74f01f559518510539b4d4b710d26278ba505bb3734e19f not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.126213 4787 scope.go:117] "RemoveContainer" containerID="111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.126623 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b\": container with ID starting with 111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b not found: ID does not exist" containerID="111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.126651 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b"} err="failed to get container status \"111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b\": rpc error: code = NotFound desc = could not find container \"111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b\": container with ID starting with 111e76b751f3779f63f982017c38316841bd21e662199d932c690865c1218e3b not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.126667 4787 scope.go:117] "RemoveContainer" containerID="d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.150643 4787 scope.go:117] "RemoveContainer" containerID="9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.168766 4787 scope.go:117] "RemoveContainer" containerID="9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.187622 4787 scope.go:117] "RemoveContainer" containerID="bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.207153 4787 scope.go:117] "RemoveContainer" containerID="a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.228348 4787 scope.go:117] "RemoveContainer" containerID="3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 
18:07:11.248390 4787 scope.go:117] "RemoveContainer" containerID="7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.267368 4787 scope.go:117] "RemoveContainer" containerID="51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.288322 4787 scope.go:117] "RemoveContainer" containerID="0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.313488 4787 scope.go:117] "RemoveContainer" containerID="cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.337846 4787 scope.go:117] "RemoveContainer" containerID="8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.358586 4787 scope.go:117] "RemoveContainer" containerID="20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.380026 4787 scope.go:117] "RemoveContainer" containerID="c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.401240 4787 scope.go:117] "RemoveContainer" containerID="98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.422338 4787 scope.go:117] "RemoveContainer" containerID="ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.444332 4787 scope.go:117] "RemoveContainer" containerID="d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.444819 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf\": container with ID starting with d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf not found: ID does not exist" containerID="d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.444881 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf"} err="failed to get container status \"d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf\": rpc error: code = NotFound desc = could not find container \"d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf\": container with ID starting with d3705f3a815b27ad87d58163350bf134c5247d86dd0855e55b0f6471ae5baebf not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.444916 4787 scope.go:117] "RemoveContainer" containerID="9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.445477 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892\": container with ID starting with 9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892 not found: ID does not exist" containerID="9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.445523 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892"} err="failed to get container status \"9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892\": rpc error: code = NotFound desc = could not find container \"9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892\": container with ID 
starting with 9916171f49c6771386486044e9599e51d2613a46033096025c8d095e3a4aa892 not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.445549 4787 scope.go:117] "RemoveContainer" containerID="9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.446168 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3\": container with ID starting with 9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3 not found: ID does not exist" containerID="9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.446200 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3"} err="failed to get container status \"9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3\": rpc error: code = NotFound desc = could not find container \"9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3\": container with ID starting with 9bd657c0956565c7ff1100473b4de65921cd3275224a98f97bb2b5de4e9beed3 not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.446217 4787 scope.go:117] "RemoveContainer" containerID="bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.446506 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8\": container with ID starting with bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8 not found: ID does not exist" containerID="bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8" Jan 26 
18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.446532 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8"} err="failed to get container status \"bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8\": rpc error: code = NotFound desc = could not find container \"bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8\": container with ID starting with bd8e7cf8ca593a2fdcea93661b33e31e92f38586e6ee2117af99c11daa954fd8 not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.446564 4787 scope.go:117] "RemoveContainer" containerID="a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.446763 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217\": container with ID starting with a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217 not found: ID does not exist" containerID="a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.446787 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217"} err="failed to get container status \"a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217\": rpc error: code = NotFound desc = could not find container \"a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217\": container with ID starting with a0bc56bda7f68f7226b0776b7ec01279d5ce6c663e85228b1550d5a42737a217 not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.446801 4787 scope.go:117] "RemoveContainer" 
containerID="3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.447173 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb\": container with ID starting with 3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb not found: ID does not exist" containerID="3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.447218 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb"} err="failed to get container status \"3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb\": rpc error: code = NotFound desc = could not find container \"3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb\": container with ID starting with 3944bae2d00fa65ef51eef94b3eae8978efa014c114f7e17aeb264f4d0580ebb not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.447237 4787 scope.go:117] "RemoveContainer" containerID="7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.447578 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae\": container with ID starting with 7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae not found: ID does not exist" containerID="7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.447602 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae"} err="failed to get container status \"7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae\": rpc error: code = NotFound desc = could not find container \"7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae\": container with ID starting with 7b8ef9736665dd1cdd04c341338f4e9ec44a43f03cd1be64bb3dd552889df9ae not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.447616 4787 scope.go:117] "RemoveContainer" containerID="51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.448040 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d\": container with ID starting with 51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d not found: ID does not exist" containerID="51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.448066 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d"} err="failed to get container status \"51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d\": rpc error: code = NotFound desc = could not find container \"51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d\": container with ID starting with 51eff1955c96ffa61d888ed06b9b4b35494bc08586f73e8c585923f226a1864d not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.448086 4787 scope.go:117] "RemoveContainer" containerID="0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.448466 4787 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866\": container with ID starting with 0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866 not found: ID does not exist" containerID="0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.448508 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866"} err="failed to get container status \"0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866\": rpc error: code = NotFound desc = could not find container \"0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866\": container with ID starting with 0ed4bfb14519a155a04c08382c42ce1d58a899012a2f2393dec0945ddefa5866 not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.448533 4787 scope.go:117] "RemoveContainer" containerID="cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.448857 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41\": container with ID starting with cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41 not found: ID does not exist" containerID="cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.448888 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41"} err="failed to get container status \"cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41\": rpc error: code = NotFound desc = could not find container 
\"cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41\": container with ID starting with cc19f6edd2cb61c55d23f684e37c5e98dc8ca72af4b92e5d41101a96ca5b9b41 not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.448910 4787 scope.go:117] "RemoveContainer" containerID="8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.449274 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2\": container with ID starting with 8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2 not found: ID does not exist" containerID="8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.449297 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2"} err="failed to get container status \"8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2\": rpc error: code = NotFound desc = could not find container \"8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2\": container with ID starting with 8365c7d244461d5aa32f616b13ccd8b47acc1cf73130d8db7f2253994bf9e3e2 not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.449312 4787 scope.go:117] "RemoveContainer" containerID="20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.449578 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8\": container with ID starting with 20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8 not found: ID does not exist" 
containerID="20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.449600 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8"} err="failed to get container status \"20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8\": rpc error: code = NotFound desc = could not find container \"20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8\": container with ID starting with 20d949446cedde894047f9da9e42657b0e7c986fe9c60d4e2f0a7943f0d7efd8 not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.449613 4787 scope.go:117] "RemoveContainer" containerID="c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.449877 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0\": container with ID starting with c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0 not found: ID does not exist" containerID="c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.449915 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0"} err="failed to get container status \"c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0\": rpc error: code = NotFound desc = could not find container \"c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0\": container with ID starting with c7ba2e1d00b181854b10d085dbf9fc6a93be4d79811ed40f84bc95db0f2bf5f0 not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.449935 4787 scope.go:117] 
"RemoveContainer" containerID="98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.450282 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c\": container with ID starting with 98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c not found: ID does not exist" containerID="98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.450336 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c"} err="failed to get container status \"98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c\": rpc error: code = NotFound desc = could not find container \"98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c\": container with ID starting with 98064e831adbdcef4ba31a1976566189444d71c29807ae133ce3844b084cb08c not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.450355 4787 scope.go:117] "RemoveContainer" containerID="ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f" Jan 26 18:07:11 crc kubenswrapper[4787]: E0126 18:07:11.450689 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f\": container with ID starting with ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f not found: ID does not exist" containerID="ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.450717 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f"} err="failed to get container status \"ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f\": rpc error: code = NotFound desc = could not find container \"ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f\": container with ID starting with ca31b18e778268847d00288ef95534092054de020c8217ae738effffff8c2e1f not found: ID does not exist" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.601917 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="254806cc-4007-4a34-9852-0716b123830f" path="/var/lib/kubelet/pods/254806cc-4007-4a34-9852-0716b123830f/volumes" Jan 26 18:07:11 crc kubenswrapper[4787]: I0126 18:07:11.603378 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" path="/var/lib/kubelet/pods/fba57dd7-3de5-4e62-817d-4fc2c295ddee/volumes" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064094 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g5kmz"] Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064644 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3871ebb-6b25-4e36-a3a1-3e9a220768f5" containerName="galera" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064656 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3871ebb-6b25-4e36-a3a1-3e9a220768f5" containerName="galera" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064670 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-auditor" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064676 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-auditor" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064685 4787 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-updater" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064691 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-updater" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064698 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="208dca76-0c20-4fd9-a685-76144777c48c" containerName="openstack-network-exporter" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064704 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="208dca76-0c20-4fd9-a685-76144777c48c" containerName="openstack-network-exporter" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064711 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-auditor" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064718 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-auditor" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064729 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd68dd7-739f-4cd0-b3eb-c786b79c4b40" containerName="nova-cell1-conductor-conductor" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064735 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd68dd7-739f-4cd0-b3eb-c786b79c4b40" containerName="nova-cell1-conductor-conductor" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064743 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-metadata" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064748 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-metadata" Jan 26 18:07:15 crc kubenswrapper[4787]: 
E0126 18:07:15.064755 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e65f25-43dd-4baf-b2fa-7256dcbd452d" containerName="rabbitmq" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064761 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e65f25-43dd-4baf-b2fa-7256dcbd452d" containerName="rabbitmq" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064770 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-server" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064776 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-server" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064788 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="proxy-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064794 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="proxy-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064801 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ba670d-d2d7-47aa-bc54-6da4d0e532f3" containerName="memcached" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064807 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ba670d-d2d7-47aa-bc54-6da4d0e532f3" containerName="memcached" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064815 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-replicator" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064822 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-replicator" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064829 4787 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8a7212-44ab-42f1-86be-8b79726fe4f8" containerName="placement-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064844 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8a7212-44ab-42f1-86be-8b79726fe4f8" containerName="placement-api" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064849 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca591aea-a146-4b51-887e-9688a249fdad" containerName="kube-state-metrics" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064855 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca591aea-a146-4b51-887e-9688a249fdad" containerName="kube-state-metrics" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064865 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-replicator" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064871 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-replicator" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064881 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-auditor" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064887 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-auditor" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064895 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-reaper" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064901 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-reaper" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064909 4787 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="swift-recon-cron" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064914 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="swift-recon-cron" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064923 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-updater" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064928 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-updater" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064937 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="208dca76-0c20-4fd9-a685-76144777c48c" containerName="ovn-northd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064957 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="208dca76-0c20-4fd9-a685-76144777c48c" containerName="ovn-northd" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064967 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="sg-core" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064972 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="sg-core" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064980 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server-init" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064986 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server-init" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.064993 4787 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.064999 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065007 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e181982-6b54-4a6f-a52c-eb025b767fb0" containerName="barbican-worker" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065012 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e181982-6b54-4a6f-a52c-eb025b767fb0" containerName="barbican-worker" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065019 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cca919-781a-48fb-99c1-ec7ebbb7c601" containerName="nova-cell0-conductor-conductor" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065027 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cca919-781a-48fb-99c1-ec7ebbb7c601" containerName="nova-cell0-conductor-conductor" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065038 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08858ab3-fd32-43dd-8002-bb2b01216237" containerName="neutron-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065044 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="08858ab3-fd32-43dd-8002-bb2b01216237" containerName="neutron-api" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065053 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c0c3c8-9282-45e5-b376-9c335e24573a" containerName="glance-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065059 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c0c3c8-9282-45e5-b376-9c335e24573a" containerName="glance-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065067 4787 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-replicator" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065072 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-replicator" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065082 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="ceilometer-notification-agent" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065088 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="ceilometer-notification-agent" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065096 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-expirer" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065102 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-expirer" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065111 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c0c3c8-9282-45e5-b376-9c335e24573a" containerName="glance-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065117 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c0c3c8-9282-45e5-b376-9c335e24573a" containerName="glance-log" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065123 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52ed138-0046-4bf6-b8f0-7bd5fb016f16" containerName="mariadb-account-create-update" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065129 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52ed138-0046-4bf6-b8f0-7bd5fb016f16" containerName="mariadb-account-create-update" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 
18:07:15.065138 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerName="barbican-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065144 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerName="barbican-api" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065153 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerName="nova-api-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065159 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerName="nova-api-log" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065168 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065174 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-log" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065184 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c9405f-74d3-4282-90ac-0a9909a68b43" containerName="probe" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065190 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c9405f-74d3-4282-90ac-0a9909a68b43" containerName="probe" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065198 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55e012f-76df-4721-8be3-dba72f37cf33" containerName="ovn-controller" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065204 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55e012f-76df-4721-8be3-dba72f37cf33" containerName="ovn-controller" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065211 4787 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerName="barbican-api-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065217 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerName="barbican-api-log" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065223 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44392b57-bc6b-4a8b-8ff3-346fab2422af" containerName="cinder-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065230 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="44392b57-bc6b-4a8b-8ff3-346fab2422af" containerName="cinder-api" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065239 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb1abb80-0591-49c7-b549-969066392a5a" containerName="setup-container" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065245 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb1abb80-0591-49c7-b549-969066392a5a" containerName="setup-container" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065252 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e181982-6b54-4a6f-a52c-eb025b767fb0" containerName="barbican-worker-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065258 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e181982-6b54-4a6f-a52c-eb025b767fb0" containerName="barbican-worker-log" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065265 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" containerName="glance-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065270 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" containerName="glance-log" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065277 4787 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="ceilometer-central-agent" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065283 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="ceilometer-central-agent" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065290 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52ed138-0046-4bf6-b8f0-7bd5fb016f16" containerName="mariadb-account-create-update" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065295 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52ed138-0046-4bf6-b8f0-7bd5fb016f16" containerName="mariadb-account-create-update" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065303 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08858ab3-fd32-43dd-8002-bb2b01216237" containerName="neutron-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065308 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="08858ab3-fd32-43dd-8002-bb2b01216237" containerName="neutron-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065316 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c9405f-74d3-4282-90ac-0a9909a68b43" containerName="cinder-scheduler" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065321 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c9405f-74d3-4282-90ac-0a9909a68b43" containerName="cinder-scheduler" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065329 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43c375f-1176-442e-98fd-5d9acba6e199" containerName="nova-scheduler-scheduler" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065335 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43c375f-1176-442e-98fd-5d9acba6e199" containerName="nova-scheduler-scheduler" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 
18:07:15.065342 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="rsync" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065347 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="rsync" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065356 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8a7212-44ab-42f1-86be-8b79726fe4f8" containerName="placement-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065363 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8a7212-44ab-42f1-86be-8b79726fe4f8" containerName="placement-log" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065374 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" containerName="glance-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065379 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" containerName="glance-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065387 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-server" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065393 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-server" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065402 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27796ea-3db5-42ad-8b22-e4d774e28578" containerName="barbican-keystone-listener" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065408 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27796ea-3db5-42ad-8b22-e4d774e28578" containerName="barbican-keystone-listener" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065419 4787 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27796ea-3db5-42ad-8b22-e4d774e28578" containerName="barbican-keystone-listener-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065425 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27796ea-3db5-42ad-8b22-e4d774e28578" containerName="barbican-keystone-listener-log" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065435 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-server" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065441 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-server" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065450 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44392b57-bc6b-4a8b-8ff3-346fab2422af" containerName="cinder-api-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065456 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="44392b57-bc6b-4a8b-8ff3-346fab2422af" containerName="cinder-api-log" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065465 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e65f25-43dd-4baf-b2fa-7256dcbd452d" containerName="setup-container" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065472 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e65f25-43dd-4baf-b2fa-7256dcbd452d" containerName="setup-container" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065484 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovs-vswitchd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065492 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovs-vswitchd" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065500 
4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85cc53a-0132-4491-82a1-056badced30c" containerName="keystone-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065506 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85cc53a-0132-4491-82a1-056badced30c" containerName="keystone-api" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065515 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerName="nova-api-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065522 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerName="nova-api-api" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065530 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb1abb80-0591-49c7-b549-969066392a5a" containerName="rabbitmq" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065537 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb1abb80-0591-49c7-b549-969066392a5a" containerName="rabbitmq" Jan 26 18:07:15 crc kubenswrapper[4787]: E0126 18:07:15.065546 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3871ebb-6b25-4e36-a3a1-3e9a220768f5" containerName="mysql-bootstrap" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065551 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3871ebb-6b25-4e36-a3a1-3e9a220768f5" containerName="mysql-bootstrap" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065684 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e52ed138-0046-4bf6-b8f0-7bd5fb016f16" containerName="mariadb-account-create-update" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065696 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="208dca76-0c20-4fd9-a685-76144777c48c" containerName="openstack-network-exporter" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065706 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c27796ea-3db5-42ad-8b22-e4d774e28578" containerName="barbican-keystone-listener" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065713 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e65f25-43dd-4baf-b2fa-7256dcbd452d" containerName="rabbitmq" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065719 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="rsync" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065726 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8a7212-44ab-42f1-86be-8b79726fe4f8" containerName="placement-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065734 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovs-vswitchd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065739 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85cc53a-0132-4491-82a1-056badced30c" containerName="keystone-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065748 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-reaper" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065757 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="44392b57-bc6b-4a8b-8ff3-346fab2422af" containerName="cinder-api-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065767 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cca919-781a-48fb-99c1-ec7ebbb7c601" containerName="nova-cell0-conductor-conductor" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065774 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27796ea-3db5-42ad-8b22-e4d774e28578" containerName="barbican-keystone-listener-log" Jan 26 18:07:15 crc 
kubenswrapper[4787]: I0126 18:07:15.065784 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-server" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065794 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-replicator" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065800 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-server" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065806 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8a7212-44ab-42f1-86be-8b79726fe4f8" containerName="placement-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065815 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-replicator" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065822 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="08858ab3-fd32-43dd-8002-bb2b01216237" containerName="neutron-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065830 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerName="barbican-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065836 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-server" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065844 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ba670d-d2d7-47aa-bc54-6da4d0e532f3" containerName="memcached" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065851 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b55e012f-76df-4721-8be3-dba72f37cf33" 
containerName="ovn-controller" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065860 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="08858ab3-fd32-43dd-8002-bb2b01216237" containerName="neutron-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065867 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c9405f-74d3-4282-90ac-0a9909a68b43" containerName="cinder-scheduler" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065878 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-updater" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065885 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="sg-core" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065892 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="ceilometer-notification-agent" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065901 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="account-auditor" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065907 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="container-auditor" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065916 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43c375f-1176-442e-98fd-5d9acba6e199" containerName="nova-scheduler-scheduler" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065924 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="254806cc-4007-4a34-9852-0716b123830f" containerName="ovsdb-server" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065932 4787 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" containerName="glance-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065939 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-auditor" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065964 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="44392b57-bc6b-4a8b-8ff3-346fab2422af" containerName="cinder-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065971 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c9405f-74d3-4282-90ac-0a9909a68b43" containerName="probe" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065978 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-updater" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065985 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerName="nova-api-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.065993 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e181982-6b54-4a6f-a52c-eb025b767fb0" containerName="barbican-worker-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066001 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-expirer" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066008 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e52ed138-0046-4bf6-b8f0-7bd5fb016f16" containerName="mariadb-account-create-update" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066014 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd698097-04e8-4ad2-bc6e-fbdf16dfd12a" containerName="glance-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066021 4787 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="bb1abb80-0591-49c7-b549-969066392a5a" containerName="rabbitmq" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066027 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="proxy-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066034 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c0c3c8-9282-45e5-b376-9c335e24573a" containerName="glance-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066041 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e181982-6b54-4a6f-a52c-eb025b767fb0" containerName="barbican-worker" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066048 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="swift-recon-cron" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066056 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa430a7c-4527-4fdd-aab8-f0f2588ccdaa" containerName="barbican-api-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066065 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca591aea-a146-4b51-887e-9688a249fdad" containerName="kube-state-metrics" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066072 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-metadata" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066079 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e77a01-1165-4f0a-ad35-fe127b5ae6c0" containerName="nova-api-api" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066087 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd68dd7-739f-4cd0-b3eb-c786b79c4b40" containerName="nova-cell1-conductor-conductor" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 
18:07:15.066096 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c0c3c8-9282-45e5-b376-9c335e24573a" containerName="glance-httpd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066104 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="426dd23d-ce8f-4f72-aece-79585de1cef1" containerName="nova-metadata-log" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066113 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="208dca76-0c20-4fd9-a685-76144777c48c" containerName="ovn-northd" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066120 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3871ebb-6b25-4e36-a3a1-3e9a220768f5" containerName="galera" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066146 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba57dd7-3de5-4e62-817d-4fc2c295ddee" containerName="object-replicator" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.066158 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97fef39-62f3-4457-924d-4b25c40fe88d" containerName="ceilometer-central-agent" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.067047 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.077763 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g5kmz"] Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.177607 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-utilities\") pod \"redhat-operators-g5kmz\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.177669 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x42v5\" (UniqueName: \"kubernetes.io/projected/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-kube-api-access-x42v5\") pod \"redhat-operators-g5kmz\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.177689 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-catalog-content\") pod \"redhat-operators-g5kmz\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.279208 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x42v5\" (UniqueName: \"kubernetes.io/projected/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-kube-api-access-x42v5\") pod \"redhat-operators-g5kmz\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.279263 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-catalog-content\") pod \"redhat-operators-g5kmz\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.279367 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-utilities\") pod \"redhat-operators-g5kmz\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.279863 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-utilities\") pod \"redhat-operators-g5kmz\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.279879 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-catalog-content\") pod \"redhat-operators-g5kmz\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.302752 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x42v5\" (UniqueName: \"kubernetes.io/projected/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-kube-api-access-x42v5\") pod \"redhat-operators-g5kmz\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.389162 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:15 crc kubenswrapper[4787]: I0126 18:07:15.620675 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g5kmz"] Jan 26 18:07:16 crc kubenswrapper[4787]: I0126 18:07:16.066203 4787 generic.go:334] "Generic (PLEG): container finished" podID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" containerID="72a0f2fe0553805cdb805eabb923354c2f02211bc1494c7e70abd2a7a8d9c9a6" exitCode=0 Jan 26 18:07:16 crc kubenswrapper[4787]: I0126 18:07:16.066286 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5kmz" event={"ID":"0cf8bcc1-32f3-47c2-9033-08cd4346caa8","Type":"ContainerDied","Data":"72a0f2fe0553805cdb805eabb923354c2f02211bc1494c7e70abd2a7a8d9c9a6"} Jan 26 18:07:16 crc kubenswrapper[4787]: I0126 18:07:16.066590 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5kmz" event={"ID":"0cf8bcc1-32f3-47c2-9033-08cd4346caa8","Type":"ContainerStarted","Data":"4a5a489e44b7159e90e4f0ca398504fcf6a3cc69180e41c80c4b452ed01173d4"} Jan 26 18:07:16 crc kubenswrapper[4787]: I0126 18:07:16.067964 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:07:16 crc kubenswrapper[4787]: I0126 18:07:16.807503 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:07:16 crc kubenswrapper[4787]: I0126 18:07:16.807583 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:07:17 crc kubenswrapper[4787]: I0126 18:07:17.076085 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5kmz" event={"ID":"0cf8bcc1-32f3-47c2-9033-08cd4346caa8","Type":"ContainerStarted","Data":"fd4fc56fde979fd26e9a8c23fc5327f6c28df3500d36d216042be2af7b82d934"} Jan 26 18:07:18 crc kubenswrapper[4787]: I0126 18:07:18.088038 4787 generic.go:334] "Generic (PLEG): container finished" podID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" containerID="fd4fc56fde979fd26e9a8c23fc5327f6c28df3500d36d216042be2af7b82d934" exitCode=0 Jan 26 18:07:18 crc kubenswrapper[4787]: I0126 18:07:18.088137 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5kmz" event={"ID":"0cf8bcc1-32f3-47c2-9033-08cd4346caa8","Type":"ContainerDied","Data":"fd4fc56fde979fd26e9a8c23fc5327f6c28df3500d36d216042be2af7b82d934"} Jan 26 18:07:20 crc kubenswrapper[4787]: I0126 18:07:20.110007 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5kmz" event={"ID":"0cf8bcc1-32f3-47c2-9033-08cd4346caa8","Type":"ContainerStarted","Data":"991e7ea434d3bcc50e4d1a98ac374ec5fd500d0da602ce2f3e9e6d5210559245"} Jan 26 18:07:20 crc kubenswrapper[4787]: I0126 18:07:20.130254 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g5kmz" podStartSLOduration=2.229514615 podStartE2EDuration="5.130235183s" podCreationTimestamp="2026-01-26 18:07:15 +0000 UTC" firstStartedPulling="2026-01-26 18:07:16.067662222 +0000 UTC m=+1404.774798355" lastFinishedPulling="2026-01-26 18:07:18.96838276 +0000 UTC m=+1407.675518923" observedRunningTime="2026-01-26 18:07:20.129641479 +0000 UTC m=+1408.836777612" watchObservedRunningTime="2026-01-26 18:07:20.130235183 +0000 UTC m=+1408.837371316" Jan 26 18:07:25 crc kubenswrapper[4787]: I0126 18:07:25.390394 4787 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:25 crc kubenswrapper[4787]: I0126 18:07:25.390787 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:25 crc kubenswrapper[4787]: I0126 18:07:25.466544 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:26 crc kubenswrapper[4787]: I0126 18:07:26.204036 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:27 crc kubenswrapper[4787]: I0126 18:07:27.254574 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g5kmz"] Jan 26 18:07:28 crc kubenswrapper[4787]: I0126 18:07:28.176854 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g5kmz" podUID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" containerName="registry-server" containerID="cri-o://991e7ea434d3bcc50e4d1a98ac374ec5fd500d0da602ce2f3e9e6d5210559245" gracePeriod=2 Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.203924 4787 generic.go:334] "Generic (PLEG): container finished" podID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" containerID="991e7ea434d3bcc50e4d1a98ac374ec5fd500d0da602ce2f3e9e6d5210559245" exitCode=0 Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.204026 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5kmz" event={"ID":"0cf8bcc1-32f3-47c2-9033-08cd4346caa8","Type":"ContainerDied","Data":"991e7ea434d3bcc50e4d1a98ac374ec5fd500d0da602ce2f3e9e6d5210559245"} Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.270597 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.421146 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-utilities\") pod \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.421279 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-catalog-content\") pod \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.421367 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x42v5\" (UniqueName: \"kubernetes.io/projected/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-kube-api-access-x42v5\") pod \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\" (UID: \"0cf8bcc1-32f3-47c2-9033-08cd4346caa8\") " Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.422332 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-utilities" (OuterVolumeSpecName: "utilities") pod "0cf8bcc1-32f3-47c2-9033-08cd4346caa8" (UID: "0cf8bcc1-32f3-47c2-9033-08cd4346caa8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.422654 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.428013 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-kube-api-access-x42v5" (OuterVolumeSpecName: "kube-api-access-x42v5") pod "0cf8bcc1-32f3-47c2-9033-08cd4346caa8" (UID: "0cf8bcc1-32f3-47c2-9033-08cd4346caa8"). InnerVolumeSpecName "kube-api-access-x42v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.524537 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x42v5\" (UniqueName: \"kubernetes.io/projected/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-kube-api-access-x42v5\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.546504 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cf8bcc1-32f3-47c2-9033-08cd4346caa8" (UID: "0cf8bcc1-32f3-47c2-9033-08cd4346caa8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:07:31 crc kubenswrapper[4787]: I0126 18:07:31.625637 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf8bcc1-32f3-47c2-9033-08cd4346caa8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:07:32 crc kubenswrapper[4787]: I0126 18:07:32.217787 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5kmz" event={"ID":"0cf8bcc1-32f3-47c2-9033-08cd4346caa8","Type":"ContainerDied","Data":"4a5a489e44b7159e90e4f0ca398504fcf6a3cc69180e41c80c4b452ed01173d4"} Jan 26 18:07:32 crc kubenswrapper[4787]: I0126 18:07:32.218803 4787 scope.go:117] "RemoveContainer" containerID="991e7ea434d3bcc50e4d1a98ac374ec5fd500d0da602ce2f3e9e6d5210559245" Jan 26 18:07:32 crc kubenswrapper[4787]: I0126 18:07:32.217917 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g5kmz" Jan 26 18:07:32 crc kubenswrapper[4787]: I0126 18:07:32.239373 4787 scope.go:117] "RemoveContainer" containerID="fd4fc56fde979fd26e9a8c23fc5327f6c28df3500d36d216042be2af7b82d934" Jan 26 18:07:32 crc kubenswrapper[4787]: I0126 18:07:32.248009 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g5kmz"] Jan 26 18:07:32 crc kubenswrapper[4787]: I0126 18:07:32.253155 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g5kmz"] Jan 26 18:07:32 crc kubenswrapper[4787]: I0126 18:07:32.267790 4787 scope.go:117] "RemoveContainer" containerID="72a0f2fe0553805cdb805eabb923354c2f02211bc1494c7e70abd2a7a8d9c9a6" Jan 26 18:07:33 crc kubenswrapper[4787]: I0126 18:07:33.605415 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" path="/var/lib/kubelet/pods/0cf8bcc1-32f3-47c2-9033-08cd4346caa8/volumes" Jan 26 18:07:46 crc 
kubenswrapper[4787]: I0126 18:07:46.808320 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:07:46 crc kubenswrapper[4787]: I0126 18:07:46.808892 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:08:16 crc kubenswrapper[4787]: I0126 18:08:16.807351 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:08:16 crc kubenswrapper[4787]: I0126 18:08:16.807887 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:08:16 crc kubenswrapper[4787]: I0126 18:08:16.807936 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 18:08:16 crc kubenswrapper[4787]: I0126 18:08:16.808567 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"113c45a679e7cee59810e8e7b032bbf95c23e0a2fbc209f74960d3bd68199f7f"} 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:08:16 crc kubenswrapper[4787]: I0126 18:08:16.808628 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://113c45a679e7cee59810e8e7b032bbf95c23e0a2fbc209f74960d3bd68199f7f" gracePeriod=600 Jan 26 18:08:17 crc kubenswrapper[4787]: I0126 18:08:17.597828 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="113c45a679e7cee59810e8e7b032bbf95c23e0a2fbc209f74960d3bd68199f7f" exitCode=0 Jan 26 18:08:17 crc kubenswrapper[4787]: I0126 18:08:17.598520 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"113c45a679e7cee59810e8e7b032bbf95c23e0a2fbc209f74960d3bd68199f7f"} Jan 26 18:08:17 crc kubenswrapper[4787]: I0126 18:08:17.598566 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010"} Jan 26 18:08:17 crc kubenswrapper[4787]: I0126 18:08:17.598582 4787 scope.go:117] "RemoveContainer" containerID="4b92bcf71cd03a611c9ec00077b120f3d83120698e0ca3da40d94cf74a7cfe86" Jan 26 18:08:54 crc kubenswrapper[4787]: I0126 18:08:54.722681 4787 scope.go:117] "RemoveContainer" containerID="08412cf7c17fc076be21e7e9d94ce72409cda76ea7e6fc93e4179d5c528bcd32" Jan 26 18:08:54 crc kubenswrapper[4787]: I0126 18:08:54.766613 4787 scope.go:117] "RemoveContainer" 
containerID="9e3d9a7712dfab6b98c10d41a68524d0c0d57025f372b62b88317d0bc581d650" Jan 26 18:08:54 crc kubenswrapper[4787]: I0126 18:08:54.812376 4787 scope.go:117] "RemoveContainer" containerID="a23ae7ff6a07b75da6a80c10f0e64e10bcacdee7dd03abd01b57adb39db6c328" Jan 26 18:08:54 crc kubenswrapper[4787]: I0126 18:08:54.837926 4787 scope.go:117] "RemoveContainer" containerID="babd92495b61f372c94e077b7addc8748311b610d29565a0044d236beb024c22" Jan 26 18:08:54 crc kubenswrapper[4787]: I0126 18:08:54.894289 4787 scope.go:117] "RemoveContainer" containerID="78fd7352dd4bb70eb277dc141504486ede9cc9f8e80b4b963e4e1a0e6c7a32fe" Jan 26 18:08:54 crc kubenswrapper[4787]: I0126 18:08:54.955099 4787 scope.go:117] "RemoveContainer" containerID="6e211d25eb1387ae0d55fb3fd2f9760e74b3357db9c68e31d9f12e4ee86167d6" Jan 26 18:08:54 crc kubenswrapper[4787]: I0126 18:08:54.998419 4787 scope.go:117] "RemoveContainer" containerID="4ff750be119f1f06ba313c23cca75fec856e7fe43ccd9c80add76d84992460a9" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.026358 4787 scope.go:117] "RemoveContainer" containerID="6a7a7e77f74977071587720834a3241c8456c1b3eccbbadeffeef150ecaa3217" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.045540 4787 scope.go:117] "RemoveContainer" containerID="800837c14b59847183af47554bf1431e09ef8a6fe30976624c57af8e04666954" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.070082 4787 scope.go:117] "RemoveContainer" containerID="2cfdc087326a802bf3ea782b505c896a6531eb26d4b33f6b1a3a2cd365d9e4ff" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.095218 4787 scope.go:117] "RemoveContainer" containerID="b57b6191eff8819bd8e4ae8e4e345a4787572c4758499603a31165aceb4fec8d" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.116589 4787 scope.go:117] "RemoveContainer" containerID="67131b831d7bbc7854d3ec550e2e376edea9a5e823c31d4b239e4adbce5fe1ef" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.161563 4787 scope.go:117] "RemoveContainer" 
containerID="d9dcca694d36180d243ee6744b654bb1c2546c3b6d701787ebd89a9e1baab732" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.189886 4787 scope.go:117] "RemoveContainer" containerID="06773298540a7b1d6d315f901baf808e6df9fef5a0e024b4b99786ac15da836d" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.220140 4787 scope.go:117] "RemoveContainer" containerID="2a5be95646268c8afef7ee687a6bc6a9a2b009cf4c92fc8984f1d889785779dd" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.249882 4787 scope.go:117] "RemoveContainer" containerID="681caf359914c2e80c323e59ae7950bc367a24c1561406e9f672a0e4a771fefd" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.277238 4787 scope.go:117] "RemoveContainer" containerID="580fea9853fd3b084dca775b0f8aa270170f8c78ca977524159fe21d4e22535f" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.300188 4787 scope.go:117] "RemoveContainer" containerID="c45530d5418d7a3dd16e8692ccd60dfb284b1c5791080839a02dcb70faac766a" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.340689 4787 scope.go:117] "RemoveContainer" containerID="d1aa06f7ee36fc291a73b3de2f60e451ed508ff2df7347718492fffa571221ef" Jan 26 18:08:55 crc kubenswrapper[4787]: I0126 18:08:55.358743 4787 scope.go:117] "RemoveContainer" containerID="0e03dd74b5f5333431fff1168265eb890ce6b5d97fb05c99c3ba84322732584b" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.646233 4787 scope.go:117] "RemoveContainer" containerID="0f1cd29082fb000604fcb6b4fffed07a1c3618edde8d45607d7bd88655d15cab" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.673312 4787 scope.go:117] "RemoveContainer" containerID="fb5b68e300042cdeab819b0943f78da3bc13f53fb414daf36d4826013aa7717e" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.705441 4787 scope.go:117] "RemoveContainer" containerID="a8ad3c863e44c6cb36d5f2929056302a930b9a9f30cd9e81a85a5cdd8fa99a03" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.736873 4787 scope.go:117] "RemoveContainer" 
containerID="754c70da01cbbd2f3c84438f9bcdd85e6b3538dd0dfbc5892db769504466451b" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.763402 4787 scope.go:117] "RemoveContainer" containerID="a522c77b7f2e449b1626089cb405e0a35925fb71dd7b4046bb28319da9af07ef" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.785576 4787 scope.go:117] "RemoveContainer" containerID="c41ea7e4541f8199f1a664d5d047bd04b1cc721396791e7f0e6eb3e40078f9bf" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.809249 4787 scope.go:117] "RemoveContainer" containerID="2293ff8d95ba43bc1f85d8da8a1ae8475ca251485cffe8a028386e3c92834862" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.848441 4787 scope.go:117] "RemoveContainer" containerID="213534ee68f203e83fe67c526d43166c90766a50a742fdda59d2f15ece3e7409" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.876342 4787 scope.go:117] "RemoveContainer" containerID="ef6ddf453382e486a5a257bd406f89a0335e45350885d60d9c024a0b7b2ded47" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.898362 4787 scope.go:117] "RemoveContainer" containerID="d672f1c0ba676ceab5a813f55187b4369018d13a2c0eb61a432e5bca929c29df" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.922566 4787 scope.go:117] "RemoveContainer" containerID="0242a6446f4c396a03b7fdd54c363335eec1edaed3cce989c3a0bc0c3fdaf390" Jan 26 18:09:55 crc kubenswrapper[4787]: I0126 18:09:55.945396 4787 scope.go:117] "RemoveContainer" containerID="34bbf27ab9cda47766dbce09363fdf4948eabf48cb2cb8e92ec94ea5da30de08" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.640457 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h42"] Jan 26 18:10:07 crc kubenswrapper[4787]: E0126 18:10:07.641580 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" containerName="extract-content" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.641603 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" containerName="extract-content" Jan 26 18:10:07 crc kubenswrapper[4787]: E0126 18:10:07.641629 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" containerName="registry-server" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.641641 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" containerName="registry-server" Jan 26 18:10:07 crc kubenswrapper[4787]: E0126 18:10:07.641662 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" containerName="extract-utilities" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.641675 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" containerName="extract-utilities" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.641896 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf8bcc1-32f3-47c2-9033-08cd4346caa8" containerName="registry-server" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.643426 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.657624 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h42"] Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.762201 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-catalog-content\") pod \"redhat-marketplace-v7h42\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.762601 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-utilities\") pod \"redhat-marketplace-v7h42\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.762769 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnvmr\" (UniqueName: \"kubernetes.io/projected/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-kube-api-access-cnvmr\") pod \"redhat-marketplace-v7h42\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.863732 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnvmr\" (UniqueName: \"kubernetes.io/projected/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-kube-api-access-cnvmr\") pod \"redhat-marketplace-v7h42\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.863824 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-catalog-content\") pod \"redhat-marketplace-v7h42\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.863844 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-utilities\") pod \"redhat-marketplace-v7h42\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.864374 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-utilities\") pod \"redhat-marketplace-v7h42\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.864471 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-catalog-content\") pod \"redhat-marketplace-v7h42\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.884072 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnvmr\" (UniqueName: \"kubernetes.io/projected/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-kube-api-access-cnvmr\") pod \"redhat-marketplace-v7h42\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:07 crc kubenswrapper[4787]: I0126 18:10:07.978259 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:08 crc kubenswrapper[4787]: I0126 18:10:08.446531 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h42"] Jan 26 18:10:08 crc kubenswrapper[4787]: I0126 18:10:08.546653 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h42" event={"ID":"ce3cb9f5-6e78-4dc5-afc6-f742217272b9","Type":"ContainerStarted","Data":"b07eebdc99b143b46658ac83661b74371a391cd6e413bded9c49b2670be63f60"} Jan 26 18:10:09 crc kubenswrapper[4787]: I0126 18:10:09.564566 4787 generic.go:334] "Generic (PLEG): container finished" podID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" containerID="5244f81465d1c7d35bb42bac78736ddfc770e7c8a1ec5a970683fdb757e345c7" exitCode=0 Jan 26 18:10:09 crc kubenswrapper[4787]: I0126 18:10:09.564662 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h42" event={"ID":"ce3cb9f5-6e78-4dc5-afc6-f742217272b9","Type":"ContainerDied","Data":"5244f81465d1c7d35bb42bac78736ddfc770e7c8a1ec5a970683fdb757e345c7"} Jan 26 18:10:11 crc kubenswrapper[4787]: I0126 18:10:11.586865 4787 generic.go:334] "Generic (PLEG): container finished" podID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" containerID="4dc99e43e22bfc5311da34ba9720eeefd33557e5786dc527b4bc9f3332eaa984" exitCode=0 Jan 26 18:10:11 crc kubenswrapper[4787]: I0126 18:10:11.586916 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h42" event={"ID":"ce3cb9f5-6e78-4dc5-afc6-f742217272b9","Type":"ContainerDied","Data":"4dc99e43e22bfc5311da34ba9720eeefd33557e5786dc527b4bc9f3332eaa984"} Jan 26 18:10:12 crc kubenswrapper[4787]: I0126 18:10:12.596646 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h42" 
event={"ID":"ce3cb9f5-6e78-4dc5-afc6-f742217272b9","Type":"ContainerStarted","Data":"9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8"} Jan 26 18:10:12 crc kubenswrapper[4787]: I0126 18:10:12.620073 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v7h42" podStartSLOduration=2.826401797 podStartE2EDuration="5.620050735s" podCreationTimestamp="2026-01-26 18:10:07 +0000 UTC" firstStartedPulling="2026-01-26 18:10:09.568462511 +0000 UTC m=+1578.275598684" lastFinishedPulling="2026-01-26 18:10:12.362111489 +0000 UTC m=+1581.069247622" observedRunningTime="2026-01-26 18:10:12.613134248 +0000 UTC m=+1581.320270381" watchObservedRunningTime="2026-01-26 18:10:12.620050735 +0000 UTC m=+1581.327186868" Jan 26 18:10:17 crc kubenswrapper[4787]: I0126 18:10:17.908101 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-whr78"] Jan 26 18:10:17 crc kubenswrapper[4787]: I0126 18:10:17.910259 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:17 crc kubenswrapper[4787]: I0126 18:10:17.915179 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whr78"] Jan 26 18:10:17 crc kubenswrapper[4787]: I0126 18:10:17.978704 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:17 crc kubenswrapper[4787]: I0126 18:10:17.978958 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.020364 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.020593 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-catalog-content\") pod \"community-operators-whr78\" (UID: \"30c2b90d-99d4-477b-ab31-a318ca39da4f\") " pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.020641 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-utilities\") pod \"community-operators-whr78\" (UID: \"30c2b90d-99d4-477b-ab31-a318ca39da4f\") " pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.020684 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bz8c\" (UniqueName: \"kubernetes.io/projected/30c2b90d-99d4-477b-ab31-a318ca39da4f-kube-api-access-4bz8c\") pod \"community-operators-whr78\" (UID: 
\"30c2b90d-99d4-477b-ab31-a318ca39da4f\") " pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.121596 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bz8c\" (UniqueName: \"kubernetes.io/projected/30c2b90d-99d4-477b-ab31-a318ca39da4f-kube-api-access-4bz8c\") pod \"community-operators-whr78\" (UID: \"30c2b90d-99d4-477b-ab31-a318ca39da4f\") " pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.121849 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-catalog-content\") pod \"community-operators-whr78\" (UID: \"30c2b90d-99d4-477b-ab31-a318ca39da4f\") " pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.121903 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-utilities\") pod \"community-operators-whr78\" (UID: \"30c2b90d-99d4-477b-ab31-a318ca39da4f\") " pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.122476 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-catalog-content\") pod \"community-operators-whr78\" (UID: \"30c2b90d-99d4-477b-ab31-a318ca39da4f\") " pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.122629 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-utilities\") pod \"community-operators-whr78\" (UID: \"30c2b90d-99d4-477b-ab31-a318ca39da4f\") 
" pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.147827 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bz8c\" (UniqueName: \"kubernetes.io/projected/30c2b90d-99d4-477b-ab31-a318ca39da4f-kube-api-access-4bz8c\") pod \"community-operators-whr78\" (UID: \"30c2b90d-99d4-477b-ab31-a318ca39da4f\") " pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.240143 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.685341 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:18 crc kubenswrapper[4787]: I0126 18:10:18.716285 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whr78"] Jan 26 18:10:19 crc kubenswrapper[4787]: I0126 18:10:19.650579 4787 generic.go:334] "Generic (PLEG): container finished" podID="30c2b90d-99d4-477b-ab31-a318ca39da4f" containerID="71f833f43e11f985dc0a317530d829547fe08e12314bdecf29db13d03ad42d3f" exitCode=0 Jan 26 18:10:19 crc kubenswrapper[4787]: I0126 18:10:19.650683 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whr78" event={"ID":"30c2b90d-99d4-477b-ab31-a318ca39da4f","Type":"ContainerDied","Data":"71f833f43e11f985dc0a317530d829547fe08e12314bdecf29db13d03ad42d3f"} Jan 26 18:10:19 crc kubenswrapper[4787]: I0126 18:10:19.651001 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whr78" event={"ID":"30c2b90d-99d4-477b-ab31-a318ca39da4f","Type":"ContainerStarted","Data":"a12c0c607fdb74718cfd895ca1b28e62059afb38444e6cfd13bd3ec8516f68be"} Jan 26 18:10:20 crc kubenswrapper[4787]: I0126 18:10:20.313744 4787 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h42"] Jan 26 18:10:20 crc kubenswrapper[4787]: I0126 18:10:20.658451 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whr78" event={"ID":"30c2b90d-99d4-477b-ab31-a318ca39da4f","Type":"ContainerStarted","Data":"c925c9b685053494775cbfa71803529c29414097efc412975e613abb056ca78d"} Jan 26 18:10:21 crc kubenswrapper[4787]: I0126 18:10:21.670063 4787 generic.go:334] "Generic (PLEG): container finished" podID="30c2b90d-99d4-477b-ab31-a318ca39da4f" containerID="c925c9b685053494775cbfa71803529c29414097efc412975e613abb056ca78d" exitCode=0 Jan 26 18:10:21 crc kubenswrapper[4787]: I0126 18:10:21.670187 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whr78" event={"ID":"30c2b90d-99d4-477b-ab31-a318ca39da4f","Type":"ContainerDied","Data":"c925c9b685053494775cbfa71803529c29414097efc412975e613abb056ca78d"} Jan 26 18:10:21 crc kubenswrapper[4787]: I0126 18:10:21.670415 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v7h42" podUID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" containerName="registry-server" containerID="cri-o://9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8" gracePeriod=2 Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.587568 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.636747 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-utilities\") pod \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.636864 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnvmr\" (UniqueName: \"kubernetes.io/projected/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-kube-api-access-cnvmr\") pod \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.636922 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-catalog-content\") pod \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\" (UID: \"ce3cb9f5-6e78-4dc5-afc6-f742217272b9\") " Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.639547 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-utilities" (OuterVolumeSpecName: "utilities") pod "ce3cb9f5-6e78-4dc5-afc6-f742217272b9" (UID: "ce3cb9f5-6e78-4dc5-afc6-f742217272b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.644452 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-kube-api-access-cnvmr" (OuterVolumeSpecName: "kube-api-access-cnvmr") pod "ce3cb9f5-6e78-4dc5-afc6-f742217272b9" (UID: "ce3cb9f5-6e78-4dc5-afc6-f742217272b9"). InnerVolumeSpecName "kube-api-access-cnvmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.661043 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce3cb9f5-6e78-4dc5-afc6-f742217272b9" (UID: "ce3cb9f5-6e78-4dc5-afc6-f742217272b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.680445 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7h42" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.680901 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h42" event={"ID":"ce3cb9f5-6e78-4dc5-afc6-f742217272b9","Type":"ContainerDied","Data":"9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8"} Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.681091 4787 scope.go:117] "RemoveContainer" containerID="9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.681228 4787 generic.go:334] "Generic (PLEG): container finished" podID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" containerID="9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8" exitCode=0 Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.681334 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h42" event={"ID":"ce3cb9f5-6e78-4dc5-afc6-f742217272b9","Type":"ContainerDied","Data":"b07eebdc99b143b46658ac83661b74371a391cd6e413bded9c49b2670be63f60"} Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.685828 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whr78" 
event={"ID":"30c2b90d-99d4-477b-ab31-a318ca39da4f","Type":"ContainerStarted","Data":"d0a545ffbccbf95bc51c02e3822029f6f6f9d3302b4f8b634a241e5fd54ef1c4"} Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.707837 4787 scope.go:117] "RemoveContainer" containerID="4dc99e43e22bfc5311da34ba9720eeefd33557e5786dc527b4bc9f3332eaa984" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.708438 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-whr78" podStartSLOduration=3.163171519 podStartE2EDuration="5.708397031s" podCreationTimestamp="2026-01-26 18:10:17 +0000 UTC" firstStartedPulling="2026-01-26 18:10:19.654455939 +0000 UTC m=+1588.361592082" lastFinishedPulling="2026-01-26 18:10:22.199681461 +0000 UTC m=+1590.906817594" observedRunningTime="2026-01-26 18:10:22.700993992 +0000 UTC m=+1591.408130125" watchObservedRunningTime="2026-01-26 18:10:22.708397031 +0000 UTC m=+1591.415533164" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.721011 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h42"] Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.727175 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h42"] Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.737596 4787 scope.go:117] "RemoveContainer" containerID="5244f81465d1c7d35bb42bac78736ddfc770e7c8a1ec5a970683fdb757e345c7" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.739466 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.739489 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnvmr\" (UniqueName: \"kubernetes.io/projected/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-kube-api-access-cnvmr\") on 
node \"crc\" DevicePath \"\"" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.739499 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3cb9f5-6e78-4dc5-afc6-f742217272b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.754140 4787 scope.go:117] "RemoveContainer" containerID="9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8" Jan 26 18:10:22 crc kubenswrapper[4787]: E0126 18:10:22.754690 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8\": container with ID starting with 9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8 not found: ID does not exist" containerID="9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.754725 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8"} err="failed to get container status \"9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8\": rpc error: code = NotFound desc = could not find container \"9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8\": container with ID starting with 9fc993d7933dca42f7faa55f7cd19de9777dc5143c5d113cdb44c77a79f3f3a8 not found: ID does not exist" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.754744 4787 scope.go:117] "RemoveContainer" containerID="4dc99e43e22bfc5311da34ba9720eeefd33557e5786dc527b4bc9f3332eaa984" Jan 26 18:10:22 crc kubenswrapper[4787]: E0126 18:10:22.755033 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc99e43e22bfc5311da34ba9720eeefd33557e5786dc527b4bc9f3332eaa984\": container with ID 
starting with 4dc99e43e22bfc5311da34ba9720eeefd33557e5786dc527b4bc9f3332eaa984 not found: ID does not exist" containerID="4dc99e43e22bfc5311da34ba9720eeefd33557e5786dc527b4bc9f3332eaa984" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.755055 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc99e43e22bfc5311da34ba9720eeefd33557e5786dc527b4bc9f3332eaa984"} err="failed to get container status \"4dc99e43e22bfc5311da34ba9720eeefd33557e5786dc527b4bc9f3332eaa984\": rpc error: code = NotFound desc = could not find container \"4dc99e43e22bfc5311da34ba9720eeefd33557e5786dc527b4bc9f3332eaa984\": container with ID starting with 4dc99e43e22bfc5311da34ba9720eeefd33557e5786dc527b4bc9f3332eaa984 not found: ID does not exist" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.755081 4787 scope.go:117] "RemoveContainer" containerID="5244f81465d1c7d35bb42bac78736ddfc770e7c8a1ec5a970683fdb757e345c7" Jan 26 18:10:22 crc kubenswrapper[4787]: E0126 18:10:22.755519 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5244f81465d1c7d35bb42bac78736ddfc770e7c8a1ec5a970683fdb757e345c7\": container with ID starting with 5244f81465d1c7d35bb42bac78736ddfc770e7c8a1ec5a970683fdb757e345c7 not found: ID does not exist" containerID="5244f81465d1c7d35bb42bac78736ddfc770e7c8a1ec5a970683fdb757e345c7" Jan 26 18:10:22 crc kubenswrapper[4787]: I0126 18:10:22.755535 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5244f81465d1c7d35bb42bac78736ddfc770e7c8a1ec5a970683fdb757e345c7"} err="failed to get container status \"5244f81465d1c7d35bb42bac78736ddfc770e7c8a1ec5a970683fdb757e345c7\": rpc error: code = NotFound desc = could not find container \"5244f81465d1c7d35bb42bac78736ddfc770e7c8a1ec5a970683fdb757e345c7\": container with ID starting with 5244f81465d1c7d35bb42bac78736ddfc770e7c8a1ec5a970683fdb757e345c7 not found: 
ID does not exist" Jan 26 18:10:23 crc kubenswrapper[4787]: I0126 18:10:23.598296 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" path="/var/lib/kubelet/pods/ce3cb9f5-6e78-4dc5-afc6-f742217272b9/volumes" Jan 26 18:10:28 crc kubenswrapper[4787]: I0126 18:10:28.240841 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:28 crc kubenswrapper[4787]: I0126 18:10:28.241367 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:28 crc kubenswrapper[4787]: I0126 18:10:28.303551 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:28 crc kubenswrapper[4787]: I0126 18:10:28.781608 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:28 crc kubenswrapper[4787]: I0126 18:10:28.832396 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-whr78"] Jan 26 18:10:30 crc kubenswrapper[4787]: I0126 18:10:30.751570 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-whr78" podUID="30c2b90d-99d4-477b-ab31-a318ca39da4f" containerName="registry-server" containerID="cri-o://d0a545ffbccbf95bc51c02e3822029f6f6f9d3302b4f8b634a241e5fd54ef1c4" gracePeriod=2 Jan 26 18:10:33 crc kubenswrapper[4787]: I0126 18:10:33.783150 4787 generic.go:334] "Generic (PLEG): container finished" podID="30c2b90d-99d4-477b-ab31-a318ca39da4f" containerID="d0a545ffbccbf95bc51c02e3822029f6f6f9d3302b4f8b634a241e5fd54ef1c4" exitCode=0 Jan 26 18:10:33 crc kubenswrapper[4787]: I0126 18:10:33.783765 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-whr78" event={"ID":"30c2b90d-99d4-477b-ab31-a318ca39da4f","Type":"ContainerDied","Data":"d0a545ffbccbf95bc51c02e3822029f6f6f9d3302b4f8b634a241e5fd54ef1c4"} Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.064009 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.242501 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-utilities\") pod \"30c2b90d-99d4-477b-ab31-a318ca39da4f\" (UID: \"30c2b90d-99d4-477b-ab31-a318ca39da4f\") " Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.242563 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-catalog-content\") pod \"30c2b90d-99d4-477b-ab31-a318ca39da4f\" (UID: \"30c2b90d-99d4-477b-ab31-a318ca39da4f\") " Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.242604 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bz8c\" (UniqueName: \"kubernetes.io/projected/30c2b90d-99d4-477b-ab31-a318ca39da4f-kube-api-access-4bz8c\") pod \"30c2b90d-99d4-477b-ab31-a318ca39da4f\" (UID: \"30c2b90d-99d4-477b-ab31-a318ca39da4f\") " Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.244011 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-utilities" (OuterVolumeSpecName: "utilities") pod "30c2b90d-99d4-477b-ab31-a318ca39da4f" (UID: "30c2b90d-99d4-477b-ab31-a318ca39da4f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.248644 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c2b90d-99d4-477b-ab31-a318ca39da4f-kube-api-access-4bz8c" (OuterVolumeSpecName: "kube-api-access-4bz8c") pod "30c2b90d-99d4-477b-ab31-a318ca39da4f" (UID: "30c2b90d-99d4-477b-ab31-a318ca39da4f"). InnerVolumeSpecName "kube-api-access-4bz8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.296034 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30c2b90d-99d4-477b-ab31-a318ca39da4f" (UID: "30c2b90d-99d4-477b-ab31-a318ca39da4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.344136 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.344171 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30c2b90d-99d4-477b-ab31-a318ca39da4f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.344187 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bz8c\" (UniqueName: \"kubernetes.io/projected/30c2b90d-99d4-477b-ab31-a318ca39da4f-kube-api-access-4bz8c\") on node \"crc\" DevicePath \"\"" Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.799639 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whr78" 
event={"ID":"30c2b90d-99d4-477b-ab31-a318ca39da4f","Type":"ContainerDied","Data":"a12c0c607fdb74718cfd895ca1b28e62059afb38444e6cfd13bd3ec8516f68be"} Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.799710 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whr78" Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.799926 4787 scope.go:117] "RemoveContainer" containerID="d0a545ffbccbf95bc51c02e3822029f6f6f9d3302b4f8b634a241e5fd54ef1c4" Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.821038 4787 scope.go:117] "RemoveContainer" containerID="c925c9b685053494775cbfa71803529c29414097efc412975e613abb056ca78d" Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.842934 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-whr78"] Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.851963 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-whr78"] Jan 26 18:10:34 crc kubenswrapper[4787]: I0126 18:10:34.870746 4787 scope.go:117] "RemoveContainer" containerID="71f833f43e11f985dc0a317530d829547fe08e12314bdecf29db13d03ad42d3f" Jan 26 18:10:35 crc kubenswrapper[4787]: I0126 18:10:35.597436 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30c2b90d-99d4-477b-ab31-a318ca39da4f" path="/var/lib/kubelet/pods/30c2b90d-99d4-477b-ab31-a318ca39da4f/volumes" Jan 26 18:10:46 crc kubenswrapper[4787]: I0126 18:10:46.808113 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:10:46 crc kubenswrapper[4787]: I0126 18:10:46.808671 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.099425 4787 scope.go:117] "RemoveContainer" containerID="5c164f905683f615c38814aa87f5da141d7ee3e302815af5ab8d55ab06d27a3b" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.135016 4787 scope.go:117] "RemoveContainer" containerID="bc6ad5472f4b52a21d777483fb774b5bdc0e32251b09e852f7a61df7009015a5" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.156605 4787 scope.go:117] "RemoveContainer" containerID="430d539df16f56714d22efd8e90b3c1890f372416d685fefd30069e93e6e7643" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.179056 4787 scope.go:117] "RemoveContainer" containerID="5d3bd6542ee023476beb041680c535d96acb21ed4b494b28f57144c146974554" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.218857 4787 scope.go:117] "RemoveContainer" containerID="08e360a78719e37fe96e2bfc968619b76835668a51ac0ddd8e89f91dc1196b73" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.239924 4787 scope.go:117] "RemoveContainer" containerID="090e2277c794b4b34d898ea6fc2742fd889317e021791fd07b23fecfee606475" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.284231 4787 scope.go:117] "RemoveContainer" containerID="b1a2c288c6624eaedffb8037540cd342391990cff20c82756f24c24e53f09171" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.313658 4787 scope.go:117] "RemoveContainer" containerID="9f6cb8c7c9c52bc527b6db3c961bea5b572e2bcd4d6880f1237fe1e4e5a8bcf0" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.331539 4787 scope.go:117] "RemoveContainer" containerID="dabc44d45ae4780a822ff11965f740a88ac60a4cb63ea6c15426d7cfbc4851b5" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.349587 4787 scope.go:117] "RemoveContainer" 
containerID="fecca0cc548e59b341664307e9f68924ab9a33c465b0b433903f91d827876469" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.363446 4787 scope.go:117] "RemoveContainer" containerID="bcd0be3eeb093ce43ce9786f0e22909ddde3933b41c4d54a2c494578c8e688b3" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.381204 4787 scope.go:117] "RemoveContainer" containerID="94dee20b5cbdda7c42538297c47887fe1a7cfe76a2625674d2e197527c1747f1" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.397788 4787 scope.go:117] "RemoveContainer" containerID="c2a537cb6f61c3da89db2a989c5e41821717495c5cd63da19464b828f6e8ec9c" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.412330 4787 scope.go:117] "RemoveContainer" containerID="cb66dd748177ebf9c4c17c1f841026be3750c54c16f6cd4b8dca41c495275e49" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.428088 4787 scope.go:117] "RemoveContainer" containerID="087dd7ca5bbeddf0bb9167f42e29d72ee8a272d4bb2fa7dd871e63f2482d1b6c" Jan 26 18:10:56 crc kubenswrapper[4787]: I0126 18:10:56.445882 4787 scope.go:117] "RemoveContainer" containerID="bbdd4c25ae774f53e0764a108142f958cb8108c512e8edfe05fab546f2783f62" Jan 26 18:11:16 crc kubenswrapper[4787]: I0126 18:11:16.807693 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:11:16 crc kubenswrapper[4787]: I0126 18:11:16.808278 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:11:46 crc kubenswrapper[4787]: I0126 18:11:46.807660 4787 patch_prober.go:28] interesting 
pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:11:46 crc kubenswrapper[4787]: I0126 18:11:46.807997 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:11:46 crc kubenswrapper[4787]: I0126 18:11:46.808059 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 18:11:46 crc kubenswrapper[4787]: I0126 18:11:46.808602 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:11:46 crc kubenswrapper[4787]: I0126 18:11:46.808652 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" gracePeriod=600 Jan 26 18:11:47 crc kubenswrapper[4787]: I0126 18:11:47.327991 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" exitCode=0 Jan 26 18:11:47 crc kubenswrapper[4787]: I0126 18:11:47.328043 
4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010"} Jan 26 18:11:47 crc kubenswrapper[4787]: I0126 18:11:47.328081 4787 scope.go:117] "RemoveContainer" containerID="113c45a679e7cee59810e8e7b032bbf95c23e0a2fbc209f74960d3bd68199f7f" Jan 26 18:11:47 crc kubenswrapper[4787]: E0126 18:11:47.436874 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:11:48 crc kubenswrapper[4787]: I0126 18:11:48.338193 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:11:48 crc kubenswrapper[4787]: E0126 18:11:48.340122 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:11:56 crc kubenswrapper[4787]: I0126 18:11:56.648250 4787 scope.go:117] "RemoveContainer" containerID="43961d5d06f3ba0dea16f2a4bbb78bb16b8ea8ace60a31feebf82fac2516b093" Jan 26 18:11:56 crc kubenswrapper[4787]: I0126 18:11:56.693741 4787 scope.go:117] "RemoveContainer" containerID="725267d2e5c128154a56f6d56c73937e88fdae0aa20aeb6e22a66b6377696a3c" Jan 26 18:11:56 crc 
kubenswrapper[4787]: I0126 18:11:56.748968 4787 scope.go:117] "RemoveContainer" containerID="65139bd59ec594a52aca13930dabadeabe11be030f710dd6f822faa0baa01ffa" Jan 26 18:11:58 crc kubenswrapper[4787]: I0126 18:11:58.589119 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:11:58 crc kubenswrapper[4787]: E0126 18:11:58.589660 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:12:09 crc kubenswrapper[4787]: I0126 18:12:09.589170 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:12:09 crc kubenswrapper[4787]: E0126 18:12:09.589986 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:12:21 crc kubenswrapper[4787]: I0126 18:12:21.593182 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:12:21 crc kubenswrapper[4787]: E0126 18:12:21.594478 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:12:34 crc kubenswrapper[4787]: I0126 18:12:34.590211 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:12:34 crc kubenswrapper[4787]: E0126 18:12:34.591017 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:12:49 crc kubenswrapper[4787]: I0126 18:12:49.596153 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:12:49 crc kubenswrapper[4787]: E0126 18:12:49.596902 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:12:56 crc kubenswrapper[4787]: I0126 18:12:56.802365 4787 scope.go:117] "RemoveContainer" containerID="bce14f5c3336833f2b3c2ae212387de58d934852b0d780492cdb05b577e65e4e" Jan 26 18:12:56 crc kubenswrapper[4787]: I0126 18:12:56.826343 4787 scope.go:117] "RemoveContainer" containerID="832830a90028608c790b768d0aff195477684857f28e7be56072174f5075f085" Jan 26 18:13:01 crc kubenswrapper[4787]: I0126 18:13:01.597842 4787 
scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:13:01 crc kubenswrapper[4787]: E0126 18:13:01.600433 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:13:14 crc kubenswrapper[4787]: I0126 18:13:14.589833 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:13:14 crc kubenswrapper[4787]: E0126 18:13:14.590687 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:13:27 crc kubenswrapper[4787]: I0126 18:13:27.590657 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:13:27 crc kubenswrapper[4787]: E0126 18:13:27.591291 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:13:42 crc kubenswrapper[4787]: I0126 
18:13:42.588796 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:13:42 crc kubenswrapper[4787]: E0126 18:13:42.590231 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:13:53 crc kubenswrapper[4787]: I0126 18:13:53.589711 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:13:53 crc kubenswrapper[4787]: E0126 18:13:53.590559 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:14:06 crc kubenswrapper[4787]: I0126 18:14:06.589606 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:14:06 crc kubenswrapper[4787]: E0126 18:14:06.590189 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:14:20 crc 
kubenswrapper[4787]: I0126 18:14:20.590168 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:14:20 crc kubenswrapper[4787]: E0126 18:14:20.590958 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:14:32 crc kubenswrapper[4787]: I0126 18:14:32.589218 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:14:32 crc kubenswrapper[4787]: E0126 18:14:32.591642 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:14:43 crc kubenswrapper[4787]: I0126 18:14:43.590219 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:14:43 crc kubenswrapper[4787]: E0126 18:14:43.591074 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 
26 18:14:57 crc kubenswrapper[4787]: I0126 18:14:57.589111 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:14:57 crc kubenswrapper[4787]: E0126 18:14:57.589818 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.148082 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6"] Jan 26 18:15:00 crc kubenswrapper[4787]: E0126 18:15:00.148742 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c2b90d-99d4-477b-ab31-a318ca39da4f" containerName="extract-utilities" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.148765 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c2b90d-99d4-477b-ab31-a318ca39da4f" containerName="extract-utilities" Jan 26 18:15:00 crc kubenswrapper[4787]: E0126 18:15:00.148792 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c2b90d-99d4-477b-ab31-a318ca39da4f" containerName="extract-content" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.148804 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c2b90d-99d4-477b-ab31-a318ca39da4f" containerName="extract-content" Jan 26 18:15:00 crc kubenswrapper[4787]: E0126 18:15:00.148821 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c2b90d-99d4-477b-ab31-a318ca39da4f" containerName="registry-server" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.148831 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="30c2b90d-99d4-477b-ab31-a318ca39da4f" containerName="registry-server" Jan 26 18:15:00 crc kubenswrapper[4787]: E0126 18:15:00.148859 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" containerName="extract-content" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.148872 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" containerName="extract-content" Jan 26 18:15:00 crc kubenswrapper[4787]: E0126 18:15:00.148888 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" containerName="registry-server" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.148898 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" containerName="registry-server" Jan 26 18:15:00 crc kubenswrapper[4787]: E0126 18:15:00.148921 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" containerName="extract-utilities" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.148932 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" containerName="extract-utilities" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.149179 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c2b90d-99d4-477b-ab31-a318ca39da4f" containerName="registry-server" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.149217 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce3cb9f5-6e78-4dc5-afc6-f742217272b9" containerName="registry-server" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.149902 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.155095 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6"] Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.155106 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.155412 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.248528 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14002b44-d9b6-447e-9c7d-2cad3f54515a-config-volume\") pod \"collect-profiles-29490855-4g7c6\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.248660 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14002b44-d9b6-447e-9c7d-2cad3f54515a-secret-volume\") pod \"collect-profiles-29490855-4g7c6\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.248751 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7zfx\" (UniqueName: \"kubernetes.io/projected/14002b44-d9b6-447e-9c7d-2cad3f54515a-kube-api-access-g7zfx\") pod \"collect-profiles-29490855-4g7c6\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.349798 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7zfx\" (UniqueName: \"kubernetes.io/projected/14002b44-d9b6-447e-9c7d-2cad3f54515a-kube-api-access-g7zfx\") pod \"collect-profiles-29490855-4g7c6\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.349887 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14002b44-d9b6-447e-9c7d-2cad3f54515a-config-volume\") pod \"collect-profiles-29490855-4g7c6\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.349970 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14002b44-d9b6-447e-9c7d-2cad3f54515a-secret-volume\") pod \"collect-profiles-29490855-4g7c6\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.350903 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14002b44-d9b6-447e-9c7d-2cad3f54515a-config-volume\") pod \"collect-profiles-29490855-4g7c6\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.355870 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/14002b44-d9b6-447e-9c7d-2cad3f54515a-secret-volume\") pod \"collect-profiles-29490855-4g7c6\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.367835 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7zfx\" (UniqueName: \"kubernetes.io/projected/14002b44-d9b6-447e-9c7d-2cad3f54515a-kube-api-access-g7zfx\") pod \"collect-profiles-29490855-4g7c6\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.471634 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:00 crc kubenswrapper[4787]: I0126 18:15:00.944354 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6"] Jan 26 18:15:01 crc kubenswrapper[4787]: I0126 18:15:01.955660 4787 generic.go:334] "Generic (PLEG): container finished" podID="14002b44-d9b6-447e-9c7d-2cad3f54515a" containerID="deb1fbf1696f2a237601cf07391e71712b8e3a0377e51a79cc31b4d42f3214ee" exitCode=0 Jan 26 18:15:01 crc kubenswrapper[4787]: I0126 18:15:01.955771 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" event={"ID":"14002b44-d9b6-447e-9c7d-2cad3f54515a","Type":"ContainerDied","Data":"deb1fbf1696f2a237601cf07391e71712b8e3a0377e51a79cc31b4d42f3214ee"} Jan 26 18:15:01 crc kubenswrapper[4787]: I0126 18:15:01.956101 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" 
event={"ID":"14002b44-d9b6-447e-9c7d-2cad3f54515a","Type":"ContainerStarted","Data":"152e2229f02320a8dbef585c39de2a7f54e73ab1a19a876af1866ce96757e4e2"} Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.230204 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.290230 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14002b44-d9b6-447e-9c7d-2cad3f54515a-secret-volume\") pod \"14002b44-d9b6-447e-9c7d-2cad3f54515a\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.290326 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14002b44-d9b6-447e-9c7d-2cad3f54515a-config-volume\") pod \"14002b44-d9b6-447e-9c7d-2cad3f54515a\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.290414 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7zfx\" (UniqueName: \"kubernetes.io/projected/14002b44-d9b6-447e-9c7d-2cad3f54515a-kube-api-access-g7zfx\") pod \"14002b44-d9b6-447e-9c7d-2cad3f54515a\" (UID: \"14002b44-d9b6-447e-9c7d-2cad3f54515a\") " Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.291402 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14002b44-d9b6-447e-9c7d-2cad3f54515a-config-volume" (OuterVolumeSpecName: "config-volume") pod "14002b44-d9b6-447e-9c7d-2cad3f54515a" (UID: "14002b44-d9b6-447e-9c7d-2cad3f54515a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.295320 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14002b44-d9b6-447e-9c7d-2cad3f54515a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "14002b44-d9b6-447e-9c7d-2cad3f54515a" (UID: "14002b44-d9b6-447e-9c7d-2cad3f54515a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.295583 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14002b44-d9b6-447e-9c7d-2cad3f54515a-kube-api-access-g7zfx" (OuterVolumeSpecName: "kube-api-access-g7zfx") pod "14002b44-d9b6-447e-9c7d-2cad3f54515a" (UID: "14002b44-d9b6-447e-9c7d-2cad3f54515a"). InnerVolumeSpecName "kube-api-access-g7zfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.393457 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14002b44-d9b6-447e-9c7d-2cad3f54515a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.393498 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14002b44-d9b6-447e-9c7d-2cad3f54515a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.393511 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7zfx\" (UniqueName: \"kubernetes.io/projected/14002b44-d9b6-447e-9c7d-2cad3f54515a-kube-api-access-g7zfx\") on node \"crc\" DevicePath \"\"" Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.976224 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" 
event={"ID":"14002b44-d9b6-447e-9c7d-2cad3f54515a","Type":"ContainerDied","Data":"152e2229f02320a8dbef585c39de2a7f54e73ab1a19a876af1866ce96757e4e2"} Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.976270 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="152e2229f02320a8dbef585c39de2a7f54e73ab1a19a876af1866ce96757e4e2" Jan 26 18:15:03 crc kubenswrapper[4787]: I0126 18:15:03.976279 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6" Jan 26 18:15:09 crc kubenswrapper[4787]: I0126 18:15:09.590583 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:15:09 crc kubenswrapper[4787]: E0126 18:15:09.591490 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:15:22 crc kubenswrapper[4787]: I0126 18:15:22.589444 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:15:22 crc kubenswrapper[4787]: E0126 18:15:22.590344 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:15:34 crc kubenswrapper[4787]: I0126 18:15:34.589556 4787 
scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:15:34 crc kubenswrapper[4787]: E0126 18:15:34.590491 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:15:45 crc kubenswrapper[4787]: I0126 18:15:45.589331 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:15:45 crc kubenswrapper[4787]: E0126 18:15:45.590299 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:15:57 crc kubenswrapper[4787]: I0126 18:15:57.588772 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:15:57 crc kubenswrapper[4787]: E0126 18:15:57.589610 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 
18:15:59.303793 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jrv8l"] Jan 26 18:15:59 crc kubenswrapper[4787]: E0126 18:15:59.304455 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14002b44-d9b6-447e-9c7d-2cad3f54515a" containerName="collect-profiles" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.304470 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="14002b44-d9b6-447e-9c7d-2cad3f54515a" containerName="collect-profiles" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.304625 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="14002b44-d9b6-447e-9c7d-2cad3f54515a" containerName="collect-profiles" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.305748 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.306586 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmmp9\" (UniqueName: \"kubernetes.io/projected/2c40e910-ead1-417d-8d76-8123c0b5dc58-kube-api-access-mmmp9\") pod \"certified-operators-jrv8l\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.306658 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-catalog-content\") pod \"certified-operators-jrv8l\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.306879 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-utilities\") pod \"certified-operators-jrv8l\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.321998 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrv8l"] Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.408151 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmmp9\" (UniqueName: \"kubernetes.io/projected/2c40e910-ead1-417d-8d76-8123c0b5dc58-kube-api-access-mmmp9\") pod \"certified-operators-jrv8l\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.408191 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-catalog-content\") pod \"certified-operators-jrv8l\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.408239 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-utilities\") pod \"certified-operators-jrv8l\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.408692 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-utilities\") pod \"certified-operators-jrv8l\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:15:59 crc kubenswrapper[4787]: 
I0126 18:15:59.408812 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-catalog-content\") pod \"certified-operators-jrv8l\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.427987 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmmp9\" (UniqueName: \"kubernetes.io/projected/2c40e910-ead1-417d-8d76-8123c0b5dc58-kube-api-access-mmmp9\") pod \"certified-operators-jrv8l\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.625186 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:15:59 crc kubenswrapper[4787]: I0126 18:15:59.901916 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrv8l"] Jan 26 18:16:00 crc kubenswrapper[4787]: I0126 18:16:00.448765 4787 generic.go:334] "Generic (PLEG): container finished" podID="2c40e910-ead1-417d-8d76-8123c0b5dc58" containerID="cad3af09e39291347a08c7812372841f9b1ad3cf6f904697214c166eadfa2cc0" exitCode=0 Jan 26 18:16:00 crc kubenswrapper[4787]: I0126 18:16:00.449076 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrv8l" event={"ID":"2c40e910-ead1-417d-8d76-8123c0b5dc58","Type":"ContainerDied","Data":"cad3af09e39291347a08c7812372841f9b1ad3cf6f904697214c166eadfa2cc0"} Jan 26 18:16:00 crc kubenswrapper[4787]: I0126 18:16:00.449280 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrv8l" 
event={"ID":"2c40e910-ead1-417d-8d76-8123c0b5dc58","Type":"ContainerStarted","Data":"d2854a0fbc5002859be7e3145678082924fd83208b187a5bdee9cc2b695af5c5"} Jan 26 18:16:00 crc kubenswrapper[4787]: I0126 18:16:00.452515 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:16:01 crc kubenswrapper[4787]: I0126 18:16:01.460384 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrv8l" event={"ID":"2c40e910-ead1-417d-8d76-8123c0b5dc58","Type":"ContainerStarted","Data":"750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a"} Jan 26 18:16:02 crc kubenswrapper[4787]: I0126 18:16:02.467495 4787 generic.go:334] "Generic (PLEG): container finished" podID="2c40e910-ead1-417d-8d76-8123c0b5dc58" containerID="750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a" exitCode=0 Jan 26 18:16:02 crc kubenswrapper[4787]: I0126 18:16:02.467533 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrv8l" event={"ID":"2c40e910-ead1-417d-8d76-8123c0b5dc58","Type":"ContainerDied","Data":"750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a"} Jan 26 18:16:03 crc kubenswrapper[4787]: I0126 18:16:03.474982 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrv8l" event={"ID":"2c40e910-ead1-417d-8d76-8123c0b5dc58","Type":"ContainerStarted","Data":"c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c"} Jan 26 18:16:03 crc kubenswrapper[4787]: I0126 18:16:03.497593 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jrv8l" podStartSLOduration=2.030006872 podStartE2EDuration="4.497573982s" podCreationTimestamp="2026-01-26 18:15:59 +0000 UTC" firstStartedPulling="2026-01-26 18:16:00.452123144 +0000 UTC m=+1929.159259317" lastFinishedPulling="2026-01-26 18:16:02.919690274 +0000 UTC 
m=+1931.626826427" observedRunningTime="2026-01-26 18:16:03.489876782 +0000 UTC m=+1932.197012915" watchObservedRunningTime="2026-01-26 18:16:03.497573982 +0000 UTC m=+1932.204710115" Jan 26 18:16:09 crc kubenswrapper[4787]: I0126 18:16:09.625979 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:16:09 crc kubenswrapper[4787]: I0126 18:16:09.627696 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:16:09 crc kubenswrapper[4787]: I0126 18:16:09.698653 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:16:10 crc kubenswrapper[4787]: I0126 18:16:10.565404 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:16:12 crc kubenswrapper[4787]: I0126 18:16:12.589130 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:16:12 crc kubenswrapper[4787]: E0126 18:16:12.589332 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:16:12 crc kubenswrapper[4787]: I0126 18:16:12.690823 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jrv8l"] Jan 26 18:16:13 crc kubenswrapper[4787]: I0126 18:16:13.545121 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jrv8l" 
podUID="2c40e910-ead1-417d-8d76-8123c0b5dc58" containerName="registry-server" containerID="cri-o://c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c" gracePeriod=2 Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.098879 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.144313 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-utilities\") pod \"2c40e910-ead1-417d-8d76-8123c0b5dc58\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.144374 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmmp9\" (UniqueName: \"kubernetes.io/projected/2c40e910-ead1-417d-8d76-8123c0b5dc58-kube-api-access-mmmp9\") pod \"2c40e910-ead1-417d-8d76-8123c0b5dc58\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.144493 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-catalog-content\") pod \"2c40e910-ead1-417d-8d76-8123c0b5dc58\" (UID: \"2c40e910-ead1-417d-8d76-8123c0b5dc58\") " Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.145475 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-utilities" (OuterVolumeSpecName: "utilities") pod "2c40e910-ead1-417d-8d76-8123c0b5dc58" (UID: "2c40e910-ead1-417d-8d76-8123c0b5dc58"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.149984 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c40e910-ead1-417d-8d76-8123c0b5dc58-kube-api-access-mmmp9" (OuterVolumeSpecName: "kube-api-access-mmmp9") pod "2c40e910-ead1-417d-8d76-8123c0b5dc58" (UID: "2c40e910-ead1-417d-8d76-8123c0b5dc58"). InnerVolumeSpecName "kube-api-access-mmmp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.196477 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c40e910-ead1-417d-8d76-8123c0b5dc58" (UID: "2c40e910-ead1-417d-8d76-8123c0b5dc58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.246035 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.246086 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmmp9\" (UniqueName: \"kubernetes.io/projected/2c40e910-ead1-417d-8d76-8123c0b5dc58-kube-api-access-mmmp9\") on node \"crc\" DevicePath \"\"" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.246098 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c40e910-ead1-417d-8d76-8123c0b5dc58-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.570194 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jrv8l" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.570196 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrv8l" event={"ID":"2c40e910-ead1-417d-8d76-8123c0b5dc58","Type":"ContainerDied","Data":"c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c"} Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.570445 4787 scope.go:117] "RemoveContainer" containerID="c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.570038 4787 generic.go:334] "Generic (PLEG): container finished" podID="2c40e910-ead1-417d-8d76-8123c0b5dc58" containerID="c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c" exitCode=0 Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.575203 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrv8l" event={"ID":"2c40e910-ead1-417d-8d76-8123c0b5dc58","Type":"ContainerDied","Data":"d2854a0fbc5002859be7e3145678082924fd83208b187a5bdee9cc2b695af5c5"} Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.594765 4787 scope.go:117] "RemoveContainer" containerID="750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.619371 4787 scope.go:117] "RemoveContainer" containerID="cad3af09e39291347a08c7812372841f9b1ad3cf6f904697214c166eadfa2cc0" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.628191 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jrv8l"] Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.639283 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jrv8l"] Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.650172 4787 scope.go:117] "RemoveContainer" 
containerID="c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c" Jan 26 18:16:15 crc kubenswrapper[4787]: E0126 18:16:15.651482 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c\": container with ID starting with c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c not found: ID does not exist" containerID="c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.651591 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c"} err="failed to get container status \"c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c\": rpc error: code = NotFound desc = could not find container \"c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c\": container with ID starting with c1fce11058e4197a5377a7b041c1d94c12661a461e050504776aa900a9787f9c not found: ID does not exist" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.651695 4787 scope.go:117] "RemoveContainer" containerID="750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a" Jan 26 18:16:15 crc kubenswrapper[4787]: E0126 18:16:15.652040 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a\": container with ID starting with 750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a not found: ID does not exist" containerID="750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.652130 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a"} err="failed to get container status \"750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a\": rpc error: code = NotFound desc = could not find container \"750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a\": container with ID starting with 750cd23a82065ff170f031dfc4423a4ddef97cc7f7e68107aaf8b02722bb762a not found: ID does not exist" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.652218 4787 scope.go:117] "RemoveContainer" containerID="cad3af09e39291347a08c7812372841f9b1ad3cf6f904697214c166eadfa2cc0" Jan 26 18:16:15 crc kubenswrapper[4787]: E0126 18:16:15.652942 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad3af09e39291347a08c7812372841f9b1ad3cf6f904697214c166eadfa2cc0\": container with ID starting with cad3af09e39291347a08c7812372841f9b1ad3cf6f904697214c166eadfa2cc0 not found: ID does not exist" containerID="cad3af09e39291347a08c7812372841f9b1ad3cf6f904697214c166eadfa2cc0" Jan 26 18:16:15 crc kubenswrapper[4787]: I0126 18:16:15.653046 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad3af09e39291347a08c7812372841f9b1ad3cf6f904697214c166eadfa2cc0"} err="failed to get container status \"cad3af09e39291347a08c7812372841f9b1ad3cf6f904697214c166eadfa2cc0\": rpc error: code = NotFound desc = could not find container \"cad3af09e39291347a08c7812372841f9b1ad3cf6f904697214c166eadfa2cc0\": container with ID starting with cad3af09e39291347a08c7812372841f9b1ad3cf6f904697214c166eadfa2cc0 not found: ID does not exist" Jan 26 18:16:17 crc kubenswrapper[4787]: I0126 18:16:17.598009 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c40e910-ead1-417d-8d76-8123c0b5dc58" path="/var/lib/kubelet/pods/2c40e910-ead1-417d-8d76-8123c0b5dc58/volumes" Jan 26 18:16:25 crc kubenswrapper[4787]: I0126 
18:16:25.592519 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:16:25 crc kubenswrapper[4787]: E0126 18:16:25.593548 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:16:37 crc kubenswrapper[4787]: I0126 18:16:37.589091 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:16:37 crc kubenswrapper[4787]: E0126 18:16:37.589779 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:16:50 crc kubenswrapper[4787]: I0126 18:16:50.589859 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:16:50 crc kubenswrapper[4787]: I0126 18:16:50.808567 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"f6d2015aa2c3c7acc80936469a750581cf2f3a43015de10aec9c940956c0be08"} Jan 26 18:19:16 crc kubenswrapper[4787]: I0126 18:19:16.809139 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:19:16 crc kubenswrapper[4787]: I0126 18:19:16.810159 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:19:46 crc kubenswrapper[4787]: I0126 18:19:46.808079 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:19:46 crc kubenswrapper[4787]: I0126 18:19:46.808924 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:20:16 crc kubenswrapper[4787]: I0126 18:20:16.807869 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:20:16 crc kubenswrapper[4787]: I0126 18:20:16.810933 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:20:16 crc kubenswrapper[4787]: I0126 18:20:16.811030 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 18:20:16 crc kubenswrapper[4787]: I0126 18:20:16.811805 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6d2015aa2c3c7acc80936469a750581cf2f3a43015de10aec9c940956c0be08"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:20:16 crc kubenswrapper[4787]: I0126 18:20:16.811866 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://f6d2015aa2c3c7acc80936469a750581cf2f3a43015de10aec9c940956c0be08" gracePeriod=600 Jan 26 18:20:18 crc kubenswrapper[4787]: I0126 18:20:18.434244 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="f6d2015aa2c3c7acc80936469a750581cf2f3a43015de10aec9c940956c0be08" exitCode=0 Jan 26 18:20:18 crc kubenswrapper[4787]: I0126 18:20:18.434522 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"f6d2015aa2c3c7acc80936469a750581cf2f3a43015de10aec9c940956c0be08"} Jan 26 18:20:18 crc kubenswrapper[4787]: I0126 18:20:18.434772 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" 
event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0"} Jan 26 18:20:18 crc kubenswrapper[4787]: I0126 18:20:18.434794 4787 scope.go:117] "RemoveContainer" containerID="b07d136d0184b67e9dc5aa77d6266f92716beb42d01e2f5e02b2c16214298010" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.429745 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h98jj"] Jan 26 18:20:20 crc kubenswrapper[4787]: E0126 18:20:20.430734 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c40e910-ead1-417d-8d76-8123c0b5dc58" containerName="extract-utilities" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.430752 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c40e910-ead1-417d-8d76-8123c0b5dc58" containerName="extract-utilities" Jan 26 18:20:20 crc kubenswrapper[4787]: E0126 18:20:20.430766 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c40e910-ead1-417d-8d76-8123c0b5dc58" containerName="extract-content" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.430775 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c40e910-ead1-417d-8d76-8123c0b5dc58" containerName="extract-content" Jan 26 18:20:20 crc kubenswrapper[4787]: E0126 18:20:20.430799 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c40e910-ead1-417d-8d76-8123c0b5dc58" containerName="registry-server" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.430808 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c40e910-ead1-417d-8d76-8123c0b5dc58" containerName="registry-server" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.431025 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c40e910-ead1-417d-8d76-8123c0b5dc58" containerName="registry-server" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.432293 4787 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.440909 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h98jj"] Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.514389 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwbf4\" (UniqueName: \"kubernetes.io/projected/688b77ab-21b7-4ac7-866b-4f30b672da5f-kube-api-access-fwbf4\") pod \"community-operators-h98jj\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.514448 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-utilities\") pod \"community-operators-h98jj\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.514508 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-catalog-content\") pod \"community-operators-h98jj\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.615783 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwbf4\" (UniqueName: \"kubernetes.io/projected/688b77ab-21b7-4ac7-866b-4f30b672da5f-kube-api-access-fwbf4\") pod \"community-operators-h98jj\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.615844 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-utilities\") pod \"community-operators-h98jj\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.615901 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-catalog-content\") pod \"community-operators-h98jj\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.616586 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-utilities\") pod \"community-operators-h98jj\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.616606 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-catalog-content\") pod \"community-operators-h98jj\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.636178 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwbf4\" (UniqueName: \"kubernetes.io/projected/688b77ab-21b7-4ac7-866b-4f30b672da5f-kube-api-access-fwbf4\") pod \"community-operators-h98jj\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:20 crc kubenswrapper[4787]: I0126 18:20:20.751691 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:21 crc kubenswrapper[4787]: I0126 18:20:21.239832 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h98jj"] Jan 26 18:20:21 crc kubenswrapper[4787]: W0126 18:20:21.250205 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod688b77ab_21b7_4ac7_866b_4f30b672da5f.slice/crio-abc69cabb962e875dc2cc9c7a29f0668bbab814ec85cdcdfb3bcfd3331d05477 WatchSource:0}: Error finding container abc69cabb962e875dc2cc9c7a29f0668bbab814ec85cdcdfb3bcfd3331d05477: Status 404 returned error can't find the container with id abc69cabb962e875dc2cc9c7a29f0668bbab814ec85cdcdfb3bcfd3331d05477 Jan 26 18:20:21 crc kubenswrapper[4787]: I0126 18:20:21.461560 4787 generic.go:334] "Generic (PLEG): container finished" podID="688b77ab-21b7-4ac7-866b-4f30b672da5f" containerID="8516d57a7b903cfd7c2f7a5e901310f96b57a64841ac30b42d7ecf6d245f6a7b" exitCode=0 Jan 26 18:20:21 crc kubenswrapper[4787]: I0126 18:20:21.461597 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h98jj" event={"ID":"688b77ab-21b7-4ac7-866b-4f30b672da5f","Type":"ContainerDied","Data":"8516d57a7b903cfd7c2f7a5e901310f96b57a64841ac30b42d7ecf6d245f6a7b"} Jan 26 18:20:21 crc kubenswrapper[4787]: I0126 18:20:21.461622 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h98jj" event={"ID":"688b77ab-21b7-4ac7-866b-4f30b672da5f","Type":"ContainerStarted","Data":"abc69cabb962e875dc2cc9c7a29f0668bbab814ec85cdcdfb3bcfd3331d05477"} Jan 26 18:20:22 crc kubenswrapper[4787]: I0126 18:20:22.471053 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h98jj" 
event={"ID":"688b77ab-21b7-4ac7-866b-4f30b672da5f","Type":"ContainerStarted","Data":"4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db"} Jan 26 18:20:23 crc kubenswrapper[4787]: I0126 18:20:23.498095 4787 generic.go:334] "Generic (PLEG): container finished" podID="688b77ab-21b7-4ac7-866b-4f30b672da5f" containerID="4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db" exitCode=0 Jan 26 18:20:23 crc kubenswrapper[4787]: I0126 18:20:23.498097 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h98jj" event={"ID":"688b77ab-21b7-4ac7-866b-4f30b672da5f","Type":"ContainerDied","Data":"4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db"} Jan 26 18:20:24 crc kubenswrapper[4787]: I0126 18:20:24.505878 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h98jj" event={"ID":"688b77ab-21b7-4ac7-866b-4f30b672da5f","Type":"ContainerStarted","Data":"8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f"} Jan 26 18:20:24 crc kubenswrapper[4787]: I0126 18:20:24.526601 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h98jj" podStartSLOduration=2.07892111 podStartE2EDuration="4.526583693s" podCreationTimestamp="2026-01-26 18:20:20 +0000 UTC" firstStartedPulling="2026-01-26 18:20:21.464444732 +0000 UTC m=+2190.171580885" lastFinishedPulling="2026-01-26 18:20:23.912107295 +0000 UTC m=+2192.619243468" observedRunningTime="2026-01-26 18:20:24.522157755 +0000 UTC m=+2193.229293898" watchObservedRunningTime="2026-01-26 18:20:24.526583693 +0000 UTC m=+2193.233719826" Jan 26 18:20:30 crc kubenswrapper[4787]: I0126 18:20:30.752134 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:30 crc kubenswrapper[4787]: I0126 18:20:30.752760 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:30 crc kubenswrapper[4787]: I0126 18:20:30.817163 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:31 crc kubenswrapper[4787]: I0126 18:20:31.628830 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:31 crc kubenswrapper[4787]: I0126 18:20:31.685503 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h98jj"] Jan 26 18:20:33 crc kubenswrapper[4787]: I0126 18:20:33.579822 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h98jj" podUID="688b77ab-21b7-4ac7-866b-4f30b672da5f" containerName="registry-server" containerID="cri-o://8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f" gracePeriod=2 Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.500853 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.587921 4787 generic.go:334] "Generic (PLEG): container finished" podID="688b77ab-21b7-4ac7-866b-4f30b672da5f" containerID="8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f" exitCode=0 Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.587977 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h98jj" event={"ID":"688b77ab-21b7-4ac7-866b-4f30b672da5f","Type":"ContainerDied","Data":"8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f"} Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.588001 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h98jj" event={"ID":"688b77ab-21b7-4ac7-866b-4f30b672da5f","Type":"ContainerDied","Data":"abc69cabb962e875dc2cc9c7a29f0668bbab814ec85cdcdfb3bcfd3331d05477"} Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.588020 4787 scope.go:117] "RemoveContainer" containerID="8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.588158 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h98jj" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.607599 4787 scope.go:117] "RemoveContainer" containerID="4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.618693 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-utilities\") pod \"688b77ab-21b7-4ac7-866b-4f30b672da5f\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.618756 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwbf4\" (UniqueName: \"kubernetes.io/projected/688b77ab-21b7-4ac7-866b-4f30b672da5f-kube-api-access-fwbf4\") pod \"688b77ab-21b7-4ac7-866b-4f30b672da5f\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.618779 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-catalog-content\") pod \"688b77ab-21b7-4ac7-866b-4f30b672da5f\" (UID: \"688b77ab-21b7-4ac7-866b-4f30b672da5f\") " Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.620003 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-utilities" (OuterVolumeSpecName: "utilities") pod "688b77ab-21b7-4ac7-866b-4f30b672da5f" (UID: "688b77ab-21b7-4ac7-866b-4f30b672da5f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.622624 4787 scope.go:117] "RemoveContainer" containerID="8516d57a7b903cfd7c2f7a5e901310f96b57a64841ac30b42d7ecf6d245f6a7b" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.625164 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688b77ab-21b7-4ac7-866b-4f30b672da5f-kube-api-access-fwbf4" (OuterVolumeSpecName: "kube-api-access-fwbf4") pod "688b77ab-21b7-4ac7-866b-4f30b672da5f" (UID: "688b77ab-21b7-4ac7-866b-4f30b672da5f"). InnerVolumeSpecName "kube-api-access-fwbf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.662060 4787 scope.go:117] "RemoveContainer" containerID="8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f" Jan 26 18:20:34 crc kubenswrapper[4787]: E0126 18:20:34.662894 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f\": container with ID starting with 8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f not found: ID does not exist" containerID="8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.662935 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f"} err="failed to get container status \"8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f\": rpc error: code = NotFound desc = could not find container \"8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f\": container with ID starting with 8f5338d0ae192c0cc615a9394f7f28804b1599b3848b5c1934cf4ad991a4ab9f not found: ID does not exist" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.662981 
4787 scope.go:117] "RemoveContainer" containerID="4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db" Jan 26 18:20:34 crc kubenswrapper[4787]: E0126 18:20:34.663468 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db\": container with ID starting with 4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db not found: ID does not exist" containerID="4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.663519 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db"} err="failed to get container status \"4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db\": rpc error: code = NotFound desc = could not find container \"4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db\": container with ID starting with 4d877291d7ba4ee065e10972294f32ff32009e81668542ffafcecaec5a3a16db not found: ID does not exist" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.663553 4787 scope.go:117] "RemoveContainer" containerID="8516d57a7b903cfd7c2f7a5e901310f96b57a64841ac30b42d7ecf6d245f6a7b" Jan 26 18:20:34 crc kubenswrapper[4787]: E0126 18:20:34.663937 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8516d57a7b903cfd7c2f7a5e901310f96b57a64841ac30b42d7ecf6d245f6a7b\": container with ID starting with 8516d57a7b903cfd7c2f7a5e901310f96b57a64841ac30b42d7ecf6d245f6a7b not found: ID does not exist" containerID="8516d57a7b903cfd7c2f7a5e901310f96b57a64841ac30b42d7ecf6d245f6a7b" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.663986 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8516d57a7b903cfd7c2f7a5e901310f96b57a64841ac30b42d7ecf6d245f6a7b"} err="failed to get container status \"8516d57a7b903cfd7c2f7a5e901310f96b57a64841ac30b42d7ecf6d245f6a7b\": rpc error: code = NotFound desc = could not find container \"8516d57a7b903cfd7c2f7a5e901310f96b57a64841ac30b42d7ecf6d245f6a7b\": container with ID starting with 8516d57a7b903cfd7c2f7a5e901310f96b57a64841ac30b42d7ecf6d245f6a7b not found: ID does not exist" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.682138 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "688b77ab-21b7-4ac7-866b-4f30b672da5f" (UID: "688b77ab-21b7-4ac7-866b-4f30b672da5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.721620 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.721643 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwbf4\" (UniqueName: \"kubernetes.io/projected/688b77ab-21b7-4ac7-866b-4f30b672da5f-kube-api-access-fwbf4\") on node \"crc\" DevicePath \"\"" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.721652 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688b77ab-21b7-4ac7-866b-4f30b672da5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.922673 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h98jj"] Jan 26 18:20:34 crc kubenswrapper[4787]: I0126 18:20:34.930175 4787 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-h98jj"] Jan 26 18:20:35 crc kubenswrapper[4787]: I0126 18:20:35.596674 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688b77ab-21b7-4ac7-866b-4f30b672da5f" path="/var/lib/kubelet/pods/688b77ab-21b7-4ac7-866b-4f30b672da5f/volumes" Jan 26 18:21:22 crc kubenswrapper[4787]: I0126 18:21:22.809865 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm7x"] Jan 26 18:21:22 crc kubenswrapper[4787]: E0126 18:21:22.810741 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688b77ab-21b7-4ac7-866b-4f30b672da5f" containerName="extract-utilities" Jan 26 18:21:22 crc kubenswrapper[4787]: I0126 18:21:22.810758 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="688b77ab-21b7-4ac7-866b-4f30b672da5f" containerName="extract-utilities" Jan 26 18:21:22 crc kubenswrapper[4787]: E0126 18:21:22.810773 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688b77ab-21b7-4ac7-866b-4f30b672da5f" containerName="extract-content" Jan 26 18:21:22 crc kubenswrapper[4787]: I0126 18:21:22.810782 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="688b77ab-21b7-4ac7-866b-4f30b672da5f" containerName="extract-content" Jan 26 18:21:22 crc kubenswrapper[4787]: E0126 18:21:22.810807 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688b77ab-21b7-4ac7-866b-4f30b672da5f" containerName="registry-server" Jan 26 18:21:22 crc kubenswrapper[4787]: I0126 18:21:22.810815 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="688b77ab-21b7-4ac7-866b-4f30b672da5f" containerName="registry-server" Jan 26 18:21:22 crc kubenswrapper[4787]: I0126 18:21:22.811027 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="688b77ab-21b7-4ac7-866b-4f30b672da5f" containerName="registry-server" Jan 26 18:21:22 crc kubenswrapper[4787]: I0126 18:21:22.812271 4787 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:22 crc kubenswrapper[4787]: I0126 18:21:22.830298 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm7x"] Jan 26 18:21:22 crc kubenswrapper[4787]: I0126 18:21:22.908593 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-utilities\") pod \"redhat-marketplace-jzm7x\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:22 crc kubenswrapper[4787]: I0126 18:21:22.908673 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrgf\" (UniqueName: \"kubernetes.io/projected/8f7ca2e0-a566-472b-abdb-73f6cb45f136-kube-api-access-lnrgf\") pod \"redhat-marketplace-jzm7x\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:22 crc kubenswrapper[4787]: I0126 18:21:22.908730 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-catalog-content\") pod \"redhat-marketplace-jzm7x\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.010601 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-catalog-content\") pod \"redhat-marketplace-jzm7x\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.010741 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-utilities\") pod \"redhat-marketplace-jzm7x\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.010778 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnrgf\" (UniqueName: \"kubernetes.io/projected/8f7ca2e0-a566-472b-abdb-73f6cb45f136-kube-api-access-lnrgf\") pod \"redhat-marketplace-jzm7x\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.011307 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-utilities\") pod \"redhat-marketplace-jzm7x\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.011335 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-catalog-content\") pod \"redhat-marketplace-jzm7x\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.030064 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnrgf\" (UniqueName: \"kubernetes.io/projected/8f7ca2e0-a566-472b-abdb-73f6cb45f136-kube-api-access-lnrgf\") pod \"redhat-marketplace-jzm7x\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.141581 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.578644 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm7x"] Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.965194 4787 generic.go:334] "Generic (PLEG): container finished" podID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" containerID="910fb5a66b221be307a44439441f5e4ced08e939d34d745b1495a56d4c860c76" exitCode=0 Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.965289 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzm7x" event={"ID":"8f7ca2e0-a566-472b-abdb-73f6cb45f136","Type":"ContainerDied","Data":"910fb5a66b221be307a44439441f5e4ced08e939d34d745b1495a56d4c860c76"} Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.965495 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzm7x" event={"ID":"8f7ca2e0-a566-472b-abdb-73f6cb45f136","Type":"ContainerStarted","Data":"edfb5be2b468eb1e680b626bd1f40f06a882069bddbb52017bcd17809f0b6431"} Jan 26 18:21:23 crc kubenswrapper[4787]: I0126 18:21:23.967811 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:21:25 crc kubenswrapper[4787]: I0126 18:21:25.979062 4787 generic.go:334] "Generic (PLEG): container finished" podID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" containerID="e1e601114f62572490a178b743ac20a4f4ff33c0c0e4046a60a47134dceb5600" exitCode=0 Jan 26 18:21:25 crc kubenswrapper[4787]: I0126 18:21:25.979151 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzm7x" event={"ID":"8f7ca2e0-a566-472b-abdb-73f6cb45f136","Type":"ContainerDied","Data":"e1e601114f62572490a178b743ac20a4f4ff33c0c0e4046a60a47134dceb5600"} Jan 26 18:21:27 crc kubenswrapper[4787]: I0126 18:21:27.995802 4787 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-jzm7x" event={"ID":"8f7ca2e0-a566-472b-abdb-73f6cb45f136","Type":"ContainerStarted","Data":"60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1"} Jan 26 18:21:28 crc kubenswrapper[4787]: I0126 18:21:28.026373 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jzm7x" podStartSLOduration=2.386897378 podStartE2EDuration="6.026354436s" podCreationTimestamp="2026-01-26 18:21:22 +0000 UTC" firstStartedPulling="2026-01-26 18:21:23.967556004 +0000 UTC m=+2252.674692137" lastFinishedPulling="2026-01-26 18:21:27.607013062 +0000 UTC m=+2256.314149195" observedRunningTime="2026-01-26 18:21:28.021022746 +0000 UTC m=+2256.728158899" watchObservedRunningTime="2026-01-26 18:21:28.026354436 +0000 UTC m=+2256.733490569" Jan 26 18:21:33 crc kubenswrapper[4787]: I0126 18:21:33.142506 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:33 crc kubenswrapper[4787]: I0126 18:21:33.143101 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:33 crc kubenswrapper[4787]: I0126 18:21:33.191408 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:34 crc kubenswrapper[4787]: I0126 18:21:34.131084 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:34 crc kubenswrapper[4787]: I0126 18:21:34.172796 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm7x"] Jan 26 18:21:36 crc kubenswrapper[4787]: I0126 18:21:36.050841 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jzm7x" 
podUID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" containerName="registry-server" containerID="cri-o://60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1" gracePeriod=2 Jan 26 18:21:36 crc kubenswrapper[4787]: I0126 18:21:36.968997 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.060885 4787 generic.go:334] "Generic (PLEG): container finished" podID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" containerID="60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1" exitCode=0 Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.060923 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzm7x" event={"ID":"8f7ca2e0-a566-472b-abdb-73f6cb45f136","Type":"ContainerDied","Data":"60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1"} Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.060982 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzm7x" event={"ID":"8f7ca2e0-a566-472b-abdb-73f6cb45f136","Type":"ContainerDied","Data":"edfb5be2b468eb1e680b626bd1f40f06a882069bddbb52017bcd17809f0b6431"} Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.061001 4787 scope.go:117] "RemoveContainer" containerID="60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.060903 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzm7x" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.080287 4787 scope.go:117] "RemoveContainer" containerID="e1e601114f62572490a178b743ac20a4f4ff33c0c0e4046a60a47134dceb5600" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.096226 4787 scope.go:117] "RemoveContainer" containerID="910fb5a66b221be307a44439441f5e4ced08e939d34d745b1495a56d4c860c76" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.120640 4787 scope.go:117] "RemoveContainer" containerID="60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1" Jan 26 18:21:37 crc kubenswrapper[4787]: E0126 18:21:37.121075 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1\": container with ID starting with 60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1 not found: ID does not exist" containerID="60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.121130 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1"} err="failed to get container status \"60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1\": rpc error: code = NotFound desc = could not find container \"60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1\": container with ID starting with 60803ec1077e90a62006d79c2cedb7f07ba7f0f2e0c4b32e420cbe09c306fef1 not found: ID does not exist" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.121164 4787 scope.go:117] "RemoveContainer" containerID="e1e601114f62572490a178b743ac20a4f4ff33c0c0e4046a60a47134dceb5600" Jan 26 18:21:37 crc kubenswrapper[4787]: E0126 18:21:37.121435 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"e1e601114f62572490a178b743ac20a4f4ff33c0c0e4046a60a47134dceb5600\": container with ID starting with e1e601114f62572490a178b743ac20a4f4ff33c0c0e4046a60a47134dceb5600 not found: ID does not exist" containerID="e1e601114f62572490a178b743ac20a4f4ff33c0c0e4046a60a47134dceb5600" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.121464 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e601114f62572490a178b743ac20a4f4ff33c0c0e4046a60a47134dceb5600"} err="failed to get container status \"e1e601114f62572490a178b743ac20a4f4ff33c0c0e4046a60a47134dceb5600\": rpc error: code = NotFound desc = could not find container \"e1e601114f62572490a178b743ac20a4f4ff33c0c0e4046a60a47134dceb5600\": container with ID starting with e1e601114f62572490a178b743ac20a4f4ff33c0c0e4046a60a47134dceb5600 not found: ID does not exist" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.121481 4787 scope.go:117] "RemoveContainer" containerID="910fb5a66b221be307a44439441f5e4ced08e939d34d745b1495a56d4c860c76" Jan 26 18:21:37 crc kubenswrapper[4787]: E0126 18:21:37.121712 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"910fb5a66b221be307a44439441f5e4ced08e939d34d745b1495a56d4c860c76\": container with ID starting with 910fb5a66b221be307a44439441f5e4ced08e939d34d745b1495a56d4c860c76 not found: ID does not exist" containerID="910fb5a66b221be307a44439441f5e4ced08e939d34d745b1495a56d4c860c76" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.121756 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"910fb5a66b221be307a44439441f5e4ced08e939d34d745b1495a56d4c860c76"} err="failed to get container status \"910fb5a66b221be307a44439441f5e4ced08e939d34d745b1495a56d4c860c76\": rpc error: code = NotFound desc = could not find container 
\"910fb5a66b221be307a44439441f5e4ced08e939d34d745b1495a56d4c860c76\": container with ID starting with 910fb5a66b221be307a44439441f5e4ced08e939d34d745b1495a56d4c860c76 not found: ID does not exist" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.124624 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-utilities\") pod \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.124731 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-catalog-content\") pod \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.124915 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnrgf\" (UniqueName: \"kubernetes.io/projected/8f7ca2e0-a566-472b-abdb-73f6cb45f136-kube-api-access-lnrgf\") pod \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\" (UID: \"8f7ca2e0-a566-472b-abdb-73f6cb45f136\") " Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.125667 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-utilities" (OuterVolumeSpecName: "utilities") pod "8f7ca2e0-a566-472b-abdb-73f6cb45f136" (UID: "8f7ca2e0-a566-472b-abdb-73f6cb45f136"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.131393 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f7ca2e0-a566-472b-abdb-73f6cb45f136-kube-api-access-lnrgf" (OuterVolumeSpecName: "kube-api-access-lnrgf") pod "8f7ca2e0-a566-472b-abdb-73f6cb45f136" (UID: "8f7ca2e0-a566-472b-abdb-73f6cb45f136"). InnerVolumeSpecName "kube-api-access-lnrgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.147500 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f7ca2e0-a566-472b-abdb-73f6cb45f136" (UID: "8f7ca2e0-a566-472b-abdb-73f6cb45f136"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.226618 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.226652 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7ca2e0-a566-472b-abdb-73f6cb45f136-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.226716 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnrgf\" (UniqueName: \"kubernetes.io/projected/8f7ca2e0-a566-472b-abdb-73f6cb45f136-kube-api-access-lnrgf\") on node \"crc\" DevicePath \"\"" Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.396677 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm7x"] Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 
18:21:37.407029 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm7x"] Jan 26 18:21:37 crc kubenswrapper[4787]: I0126 18:21:37.606073 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" path="/var/lib/kubelet/pods/8f7ca2e0-a566-472b-abdb-73f6cb45f136/volumes" Jan 26 18:22:46 crc kubenswrapper[4787]: I0126 18:22:46.807457 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:22:46 crc kubenswrapper[4787]: I0126 18:22:46.808221 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:23:16 crc kubenswrapper[4787]: I0126 18:23:16.808466 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:23:16 crc kubenswrapper[4787]: I0126 18:23:16.809035 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:23:46 crc kubenswrapper[4787]: I0126 18:23:46.807863 4787 patch_prober.go:28] interesting 
pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:23:46 crc kubenswrapper[4787]: I0126 18:23:46.808395 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:23:46 crc kubenswrapper[4787]: I0126 18:23:46.808448 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 18:23:46 crc kubenswrapper[4787]: I0126 18:23:46.809174 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:23:46 crc kubenswrapper[4787]: I0126 18:23:46.809240 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" gracePeriod=600 Jan 26 18:23:46 crc kubenswrapper[4787]: E0126 18:23:46.937547 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:23:47 crc kubenswrapper[4787]: I0126 18:23:47.230300 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" exitCode=0 Jan 26 18:23:47 crc kubenswrapper[4787]: I0126 18:23:47.230375 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0"} Jan 26 18:23:47 crc kubenswrapper[4787]: I0126 18:23:47.230654 4787 scope.go:117] "RemoveContainer" containerID="f6d2015aa2c3c7acc80936469a750581cf2f3a43015de10aec9c940956c0be08" Jan 26 18:23:47 crc kubenswrapper[4787]: I0126 18:23:47.231117 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:23:47 crc kubenswrapper[4787]: E0126 18:23:47.231353 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:23:50 crc kubenswrapper[4787]: I0126 18:23:50.906356 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v9ptr"] Jan 26 18:23:50 crc kubenswrapper[4787]: E0126 18:23:50.907109 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" containerName="extract-content" Jan 26 18:23:50 crc kubenswrapper[4787]: I0126 18:23:50.907129 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" containerName="extract-content" Jan 26 18:23:50 crc kubenswrapper[4787]: E0126 18:23:50.907145 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" containerName="extract-utilities" Jan 26 18:23:50 crc kubenswrapper[4787]: I0126 18:23:50.907153 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" containerName="extract-utilities" Jan 26 18:23:50 crc kubenswrapper[4787]: E0126 18:23:50.907172 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" containerName="registry-server" Jan 26 18:23:50 crc kubenswrapper[4787]: I0126 18:23:50.907180 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" containerName="registry-server" Jan 26 18:23:50 crc kubenswrapper[4787]: I0126 18:23:50.907431 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f7ca2e0-a566-472b-abdb-73f6cb45f136" containerName="registry-server" Jan 26 18:23:50 crc kubenswrapper[4787]: I0126 18:23:50.909687 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:23:50 crc kubenswrapper[4787]: I0126 18:23:50.928046 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9ptr"] Jan 26 18:23:51 crc kubenswrapper[4787]: I0126 18:23:51.101244 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljrkh\" (UniqueName: \"kubernetes.io/projected/a63323cc-a333-43b0-bdfa-e1b9342cbc53-kube-api-access-ljrkh\") pod \"redhat-operators-v9ptr\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:23:51 crc kubenswrapper[4787]: I0126 18:23:51.101346 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-catalog-content\") pod \"redhat-operators-v9ptr\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:23:51 crc kubenswrapper[4787]: I0126 18:23:51.101407 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-utilities\") pod \"redhat-operators-v9ptr\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:23:51 crc kubenswrapper[4787]: I0126 18:23:51.202991 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljrkh\" (UniqueName: \"kubernetes.io/projected/a63323cc-a333-43b0-bdfa-e1b9342cbc53-kube-api-access-ljrkh\") pod \"redhat-operators-v9ptr\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:23:51 crc kubenswrapper[4787]: I0126 18:23:51.203415 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-catalog-content\") pod \"redhat-operators-v9ptr\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:23:51 crc kubenswrapper[4787]: I0126 18:23:51.203893 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-catalog-content\") pod \"redhat-operators-v9ptr\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:23:51 crc kubenswrapper[4787]: I0126 18:23:51.203987 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-utilities\") pod \"redhat-operators-v9ptr\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:23:51 crc kubenswrapper[4787]: I0126 18:23:51.204280 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-utilities\") pod \"redhat-operators-v9ptr\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:23:51 crc kubenswrapper[4787]: I0126 18:23:51.221246 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljrkh\" (UniqueName: \"kubernetes.io/projected/a63323cc-a333-43b0-bdfa-e1b9342cbc53-kube-api-access-ljrkh\") pod \"redhat-operators-v9ptr\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:23:51 crc kubenswrapper[4787]: I0126 18:23:51.230880 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:23:51 crc kubenswrapper[4787]: I0126 18:23:51.669810 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9ptr"] Jan 26 18:23:52 crc kubenswrapper[4787]: I0126 18:23:52.266235 4787 generic.go:334] "Generic (PLEG): container finished" podID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerID="b7d06a3a832e0f7249817832cb92427b2a095bba061e491ad6289d5b855b1800" exitCode=0 Jan 26 18:23:52 crc kubenswrapper[4787]: I0126 18:23:52.266277 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9ptr" event={"ID":"a63323cc-a333-43b0-bdfa-e1b9342cbc53","Type":"ContainerDied","Data":"b7d06a3a832e0f7249817832cb92427b2a095bba061e491ad6289d5b855b1800"} Jan 26 18:23:52 crc kubenswrapper[4787]: I0126 18:23:52.266550 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9ptr" event={"ID":"a63323cc-a333-43b0-bdfa-e1b9342cbc53","Type":"ContainerStarted","Data":"018e64d814c4d9acfd0e7d0ae9f2a1d9931055b8336534b3cf1ff55771365276"} Jan 26 18:23:53 crc kubenswrapper[4787]: I0126 18:23:53.273479 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9ptr" event={"ID":"a63323cc-a333-43b0-bdfa-e1b9342cbc53","Type":"ContainerStarted","Data":"f8959c9d2fb6bf443d7c2d24772aa452160e6ed697c8177a3fe044896dc4f028"} Jan 26 18:23:54 crc kubenswrapper[4787]: I0126 18:23:54.281611 4787 generic.go:334] "Generic (PLEG): container finished" podID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerID="f8959c9d2fb6bf443d7c2d24772aa452160e6ed697c8177a3fe044896dc4f028" exitCode=0 Jan 26 18:23:54 crc kubenswrapper[4787]: I0126 18:23:54.281657 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9ptr" 
event={"ID":"a63323cc-a333-43b0-bdfa-e1b9342cbc53","Type":"ContainerDied","Data":"f8959c9d2fb6bf443d7c2d24772aa452160e6ed697c8177a3fe044896dc4f028"} Jan 26 18:23:55 crc kubenswrapper[4787]: I0126 18:23:55.290804 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9ptr" event={"ID":"a63323cc-a333-43b0-bdfa-e1b9342cbc53","Type":"ContainerStarted","Data":"a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405"} Jan 26 18:23:55 crc kubenswrapper[4787]: I0126 18:23:55.309847 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v9ptr" podStartSLOduration=2.646286596 podStartE2EDuration="5.309825852s" podCreationTimestamp="2026-01-26 18:23:50 +0000 UTC" firstStartedPulling="2026-01-26 18:23:52.268386278 +0000 UTC m=+2400.975522421" lastFinishedPulling="2026-01-26 18:23:54.931925534 +0000 UTC m=+2403.639061677" observedRunningTime="2026-01-26 18:23:55.307112385 +0000 UTC m=+2404.014248538" watchObservedRunningTime="2026-01-26 18:23:55.309825852 +0000 UTC m=+2404.016961985" Jan 26 18:24:00 crc kubenswrapper[4787]: I0126 18:24:00.588934 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:24:00 crc kubenswrapper[4787]: E0126 18:24:00.589639 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:24:01 crc kubenswrapper[4787]: I0126 18:24:01.232282 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:24:01 crc 
kubenswrapper[4787]: I0126 18:24:01.232627 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:24:01 crc kubenswrapper[4787]: I0126 18:24:01.281302 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:24:01 crc kubenswrapper[4787]: I0126 18:24:01.365233 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:24:01 crc kubenswrapper[4787]: I0126 18:24:01.515526 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9ptr"] Jan 26 18:24:03 crc kubenswrapper[4787]: I0126 18:24:03.344058 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v9ptr" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" containerID="cri-o://a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" gracePeriod=2 Jan 26 18:24:11 crc kubenswrapper[4787]: E0126 18:24:11.232585 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:11 crc kubenswrapper[4787]: E0126 18:24:11.233750 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:11 crc 
kubenswrapper[4787]: E0126 18:24:11.234202 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:11 crc kubenswrapper[4787]: E0126 18:24:11.234250 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-v9ptr" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" Jan 26 18:24:13 crc kubenswrapper[4787]: I0126 18:24:13.590304 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:24:13 crc kubenswrapper[4787]: E0126 18:24:13.591107 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:24:21 crc kubenswrapper[4787]: E0126 18:24:21.232610 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:21 crc kubenswrapper[4787]: E0126 18:24:21.233772 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:21 crc kubenswrapper[4787]: E0126 18:24:21.234622 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:21 crc kubenswrapper[4787]: E0126 18:24:21.234691 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-v9ptr" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" Jan 26 18:24:31 crc kubenswrapper[4787]: E0126 18:24:31.233298 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:31 crc kubenswrapper[4787]: E0126 18:24:31.234527 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:31 crc kubenswrapper[4787]: E0126 18:24:31.235204 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:31 crc kubenswrapper[4787]: E0126 18:24:31.235289 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-v9ptr" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" Jan 26 18:24:41 crc kubenswrapper[4787]: E0126 18:24:41.232461 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:41 crc kubenswrapper[4787]: E0126 18:24:41.234223 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" 
containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:41 crc kubenswrapper[4787]: E0126 18:24:41.234888 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:41 crc kubenswrapper[4787]: E0126 18:24:41.235043 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-v9ptr" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" Jan 26 18:24:51 crc kubenswrapper[4787]: E0126 18:24:51.233203 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:51 crc kubenswrapper[4787]: E0126 18:24:51.234273 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:51 crc kubenswrapper[4787]: E0126 18:24:51.235112 4787 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:24:51 crc kubenswrapper[4787]: E0126 18:24:51.235696 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-v9ptr" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" Jan 26 18:25:01 crc kubenswrapper[4787]: E0126 18:25:01.232445 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:25:01 crc kubenswrapper[4787]: E0126 18:25:01.233811 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:25:01 crc kubenswrapper[4787]: E0126 18:25:01.234585 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:25:01 crc kubenswrapper[4787]: E0126 18:25:01.234668 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-v9ptr" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" Jan 26 18:25:11 crc kubenswrapper[4787]: E0126 18:25:11.232622 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:25:11 crc kubenswrapper[4787]: E0126 18:25:11.233944 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:25:11 crc kubenswrapper[4787]: E0126 18:25:11.235210 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:25:11 crc kubenswrapper[4787]: E0126 18:25:11.235291 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-v9ptr" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" Jan 26 18:25:14 crc kubenswrapper[4787]: I0126 18:25:14.881016 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" podUID="5fd3204c-f4d7-466e-94b4-8463575086be" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 26 18:25:21 crc kubenswrapper[4787]: E0126 18:25:21.232476 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:25:21 crc kubenswrapper[4787]: E0126 18:25:21.233136 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:25:21 crc kubenswrapper[4787]: E0126 18:25:21.234237 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking 
if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 18:25:21 crc kubenswrapper[4787]: E0126 18:25:21.234268 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-v9ptr" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" Jan 26 18:25:22 crc kubenswrapper[4787]: I0126 18:25:22.432022 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9ptr_a63323cc-a333-43b0-bdfa-e1b9342cbc53/registry-server/0.log" Jan 26 18:25:22 crc kubenswrapper[4787]: I0126 18:25:22.434843 4787 generic.go:334] "Generic (PLEG): container finished" podID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" exitCode=-1 Jan 26 18:25:22 crc kubenswrapper[4787]: I0126 18:25:22.434982 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9ptr" event={"ID":"a63323cc-a333-43b0-bdfa-e1b9342cbc53","Type":"ContainerDied","Data":"a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405"} Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.349243 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.443387 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9ptr" event={"ID":"a63323cc-a333-43b0-bdfa-e1b9342cbc53","Type":"ContainerDied","Data":"018e64d814c4d9acfd0e7d0ae9f2a1d9931055b8336534b3cf1ff55771365276"} Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.443442 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9ptr" Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.443457 4787 scope.go:117] "RemoveContainer" containerID="a9addc9f179e6cbd1ba4d4910d3e78723337240b23f80055ee1073578dc2a405" Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.444215 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:25:23 crc kubenswrapper[4787]: E0126 18:25:23.444514 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.461039 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-utilities\") pod \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.461113 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-catalog-content\") pod \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.461159 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljrkh\" (UniqueName: \"kubernetes.io/projected/a63323cc-a333-43b0-bdfa-e1b9342cbc53-kube-api-access-ljrkh\") pod \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\" (UID: \"a63323cc-a333-43b0-bdfa-e1b9342cbc53\") " Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.463001 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-utilities" (OuterVolumeSpecName: "utilities") pod "a63323cc-a333-43b0-bdfa-e1b9342cbc53" (UID: "a63323cc-a333-43b0-bdfa-e1b9342cbc53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.466288 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a63323cc-a333-43b0-bdfa-e1b9342cbc53-kube-api-access-ljrkh" (OuterVolumeSpecName: "kube-api-access-ljrkh") pod "a63323cc-a333-43b0-bdfa-e1b9342cbc53" (UID: "a63323cc-a333-43b0-bdfa-e1b9342cbc53"). InnerVolumeSpecName "kube-api-access-ljrkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.563065 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.563120 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljrkh\" (UniqueName: \"kubernetes.io/projected/a63323cc-a333-43b0-bdfa-e1b9342cbc53-kube-api-access-ljrkh\") on node \"crc\" DevicePath \"\"" Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.763268 4787 scope.go:117] "RemoveContainer" containerID="f8959c9d2fb6bf443d7c2d24772aa452160e6ed697c8177a3fe044896dc4f028" Jan 26 18:25:23 crc kubenswrapper[4787]: I0126 18:25:23.789290 4787 scope.go:117] "RemoveContainer" containerID="b7d06a3a832e0f7249817832cb92427b2a095bba061e491ad6289d5b855b1800" Jan 26 18:25:24 crc kubenswrapper[4787]: I0126 18:25:24.452974 4787 generic.go:334] "Generic (PLEG): container finished" podID="5fd3204c-f4d7-466e-94b4-8463575086be" containerID="224a758615b0f95771357008e90f0d4af0b20639e689c5c665c2f4ccdeb3ff29" exitCode=1 Jan 26 18:25:24 crc kubenswrapper[4787]: I0126 18:25:24.452994 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" event={"ID":"5fd3204c-f4d7-466e-94b4-8463575086be","Type":"ContainerDied","Data":"224a758615b0f95771357008e90f0d4af0b20639e689c5c665c2f4ccdeb3ff29"} Jan 26 18:25:24 crc kubenswrapper[4787]: I0126 18:25:24.453494 4787 scope.go:117] "RemoveContainer" containerID="224a758615b0f95771357008e90f0d4af0b20639e689c5c665c2f4ccdeb3ff29" Jan 26 18:25:24 crc kubenswrapper[4787]: I0126 18:25:24.741712 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"a63323cc-a333-43b0-bdfa-e1b9342cbc53" (UID: "a63323cc-a333-43b0-bdfa-e1b9342cbc53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:25:24 crc kubenswrapper[4787]: I0126 18:25:24.780276 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a63323cc-a333-43b0-bdfa-e1b9342cbc53-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:25:24 crc kubenswrapper[4787]: I0126 18:25:24.880368 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" Jan 26 18:25:24 crc kubenswrapper[4787]: I0126 18:25:24.880419 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" Jan 26 18:25:24 crc kubenswrapper[4787]: I0126 18:25:24.982733 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9ptr"] Jan 26 18:25:24 crc kubenswrapper[4787]: I0126 18:25:24.991044 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v9ptr"] Jan 26 18:25:25 crc kubenswrapper[4787]: I0126 18:25:25.461283 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" event={"ID":"5fd3204c-f4d7-466e-94b4-8463575086be","Type":"ContainerStarted","Data":"ae569ac16442ef6ca758769d9f16de62ee2e5f57dcb9df5bf07bfefe1efbd5cb"} Jan 26 18:25:25 crc kubenswrapper[4787]: I0126 18:25:25.461522 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" Jan 26 18:25:25 crc kubenswrapper[4787]: I0126 18:25:25.600290 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" 
path="/var/lib/kubelet/pods/a63323cc-a333-43b0-bdfa-e1b9342cbc53/volumes" Jan 26 18:25:34 crc kubenswrapper[4787]: I0126 18:25:34.882527 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-clf9c" Jan 26 18:25:38 crc kubenswrapper[4787]: I0126 18:25:38.589003 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:25:38 crc kubenswrapper[4787]: E0126 18:25:38.589472 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:25:49 crc kubenswrapper[4787]: I0126 18:25:49.589240 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:25:49 crc kubenswrapper[4787]: E0126 18:25:49.590105 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:26:00 crc kubenswrapper[4787]: I0126 18:26:00.590079 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:26:00 crc kubenswrapper[4787]: E0126 18:26:00.590801 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.208616 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9649p"] Jan 26 18:26:01 crc kubenswrapper[4787]: E0126 18:26:01.209107 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="extract-utilities" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.209137 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="extract-utilities" Jan 26 18:26:01 crc kubenswrapper[4787]: E0126 18:26:01.209159 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="extract-content" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.209169 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="extract-content" Jan 26 18:26:01 crc kubenswrapper[4787]: E0126 18:26:01.209190 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.209200 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.209399 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a63323cc-a333-43b0-bdfa-e1b9342cbc53" containerName="registry-server" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.211305 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.215748 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9649p"] Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.358942 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-catalog-content\") pod \"certified-operators-9649p\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.359052 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-utilities\") pod \"certified-operators-9649p\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.359071 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bspjl\" (UniqueName: \"kubernetes.io/projected/19569f7c-97c8-4572-a766-343b805d6dab-kube-api-access-bspjl\") pod \"certified-operators-9649p\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.460924 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-catalog-content\") pod \"certified-operators-9649p\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.461275 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-utilities\") pod \"certified-operators-9649p\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.461384 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bspjl\" (UniqueName: \"kubernetes.io/projected/19569f7c-97c8-4572-a766-343b805d6dab-kube-api-access-bspjl\") pod \"certified-operators-9649p\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.461574 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-catalog-content\") pod \"certified-operators-9649p\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.461728 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-utilities\") pod \"certified-operators-9649p\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.481753 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bspjl\" (UniqueName: \"kubernetes.io/projected/19569f7c-97c8-4572-a766-343b805d6dab-kube-api-access-bspjl\") pod \"certified-operators-9649p\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:01 crc kubenswrapper[4787]: I0126 18:26:01.546005 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:02 crc kubenswrapper[4787]: I0126 18:26:02.015651 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9649p"] Jan 26 18:26:02 crc kubenswrapper[4787]: I0126 18:26:02.711890 4787 generic.go:334] "Generic (PLEG): container finished" podID="19569f7c-97c8-4572-a766-343b805d6dab" containerID="8be6d9d73ede695b53804152337344e3d1ce0ee8e76d4e9993ae19aad31bb369" exitCode=0 Jan 26 18:26:02 crc kubenswrapper[4787]: I0126 18:26:02.711959 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9649p" event={"ID":"19569f7c-97c8-4572-a766-343b805d6dab","Type":"ContainerDied","Data":"8be6d9d73ede695b53804152337344e3d1ce0ee8e76d4e9993ae19aad31bb369"} Jan 26 18:26:02 crc kubenswrapper[4787]: I0126 18:26:02.712017 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9649p" event={"ID":"19569f7c-97c8-4572-a766-343b805d6dab","Type":"ContainerStarted","Data":"f8c605276ce6656bb32bf7bcb8d0a9701156d9497255702f43b69c41232f433b"} Jan 26 18:26:03 crc kubenswrapper[4787]: I0126 18:26:03.721870 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9649p" event={"ID":"19569f7c-97c8-4572-a766-343b805d6dab","Type":"ContainerStarted","Data":"ff0d20fe68b855b08c5489d483fef97f8eaedcc3ffc20b76193131950f701258"} Jan 26 18:26:03 crc kubenswrapper[4787]: E0126 18:26:03.971017 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19569f7c_97c8_4572_a766_343b805d6dab.slice/crio-conmon-ff0d20fe68b855b08c5489d483fef97f8eaedcc3ffc20b76193131950f701258.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19569f7c_97c8_4572_a766_343b805d6dab.slice/crio-ff0d20fe68b855b08c5489d483fef97f8eaedcc3ffc20b76193131950f701258.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:26:04 crc kubenswrapper[4787]: I0126 18:26:04.736726 4787 generic.go:334] "Generic (PLEG): container finished" podID="19569f7c-97c8-4572-a766-343b805d6dab" containerID="ff0d20fe68b855b08c5489d483fef97f8eaedcc3ffc20b76193131950f701258" exitCode=0 Jan 26 18:26:04 crc kubenswrapper[4787]: I0126 18:26:04.736791 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9649p" event={"ID":"19569f7c-97c8-4572-a766-343b805d6dab","Type":"ContainerDied","Data":"ff0d20fe68b855b08c5489d483fef97f8eaedcc3ffc20b76193131950f701258"} Jan 26 18:26:05 crc kubenswrapper[4787]: I0126 18:26:05.746252 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9649p" event={"ID":"19569f7c-97c8-4572-a766-343b805d6dab","Type":"ContainerStarted","Data":"c38703269b5001358af6025d8a0db8462609fea8a064192fdb19e6d47490dc58"} Jan 26 18:26:05 crc kubenswrapper[4787]: I0126 18:26:05.763529 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9649p" podStartSLOduration=2.355441033 podStartE2EDuration="4.763506406s" podCreationTimestamp="2026-01-26 18:26:01 +0000 UTC" firstStartedPulling="2026-01-26 18:26:02.713754702 +0000 UTC m=+2531.420890845" lastFinishedPulling="2026-01-26 18:26:05.121820095 +0000 UTC m=+2533.828956218" observedRunningTime="2026-01-26 18:26:05.76203495 +0000 UTC m=+2534.469171093" watchObservedRunningTime="2026-01-26 18:26:05.763506406 +0000 UTC m=+2534.470642549" Jan 26 18:26:11 crc kubenswrapper[4787]: I0126 18:26:11.547730 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:11 crc 
kubenswrapper[4787]: I0126 18:26:11.548124 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:11 crc kubenswrapper[4787]: I0126 18:26:11.604203 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:11 crc kubenswrapper[4787]: I0126 18:26:11.869876 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:11 crc kubenswrapper[4787]: I0126 18:26:11.919972 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9649p"] Jan 26 18:26:12 crc kubenswrapper[4787]: I0126 18:26:12.588860 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:26:12 crc kubenswrapper[4787]: E0126 18:26:12.589185 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:26:13 crc kubenswrapper[4787]: I0126 18:26:13.823983 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9649p" podUID="19569f7c-97c8-4572-a766-343b805d6dab" containerName="registry-server" containerID="cri-o://c38703269b5001358af6025d8a0db8462609fea8a064192fdb19e6d47490dc58" gracePeriod=2 Jan 26 18:26:14 crc kubenswrapper[4787]: I0126 18:26:14.839197 4787 generic.go:334] "Generic (PLEG): container finished" podID="19569f7c-97c8-4572-a766-343b805d6dab" 
containerID="c38703269b5001358af6025d8a0db8462609fea8a064192fdb19e6d47490dc58" exitCode=0 Jan 26 18:26:14 crc kubenswrapper[4787]: I0126 18:26:14.839266 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9649p" event={"ID":"19569f7c-97c8-4572-a766-343b805d6dab","Type":"ContainerDied","Data":"c38703269b5001358af6025d8a0db8462609fea8a064192fdb19e6d47490dc58"} Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.319901 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.462309 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-utilities\") pod \"19569f7c-97c8-4572-a766-343b805d6dab\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.462481 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bspjl\" (UniqueName: \"kubernetes.io/projected/19569f7c-97c8-4572-a766-343b805d6dab-kube-api-access-bspjl\") pod \"19569f7c-97c8-4572-a766-343b805d6dab\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.462574 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-catalog-content\") pod \"19569f7c-97c8-4572-a766-343b805d6dab\" (UID: \"19569f7c-97c8-4572-a766-343b805d6dab\") " Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.463382 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-utilities" (OuterVolumeSpecName: "utilities") pod "19569f7c-97c8-4572-a766-343b805d6dab" (UID: 
"19569f7c-97c8-4572-a766-343b805d6dab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.469164 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19569f7c-97c8-4572-a766-343b805d6dab-kube-api-access-bspjl" (OuterVolumeSpecName: "kube-api-access-bspjl") pod "19569f7c-97c8-4572-a766-343b805d6dab" (UID: "19569f7c-97c8-4572-a766-343b805d6dab"). InnerVolumeSpecName "kube-api-access-bspjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.511460 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19569f7c-97c8-4572-a766-343b805d6dab" (UID: "19569f7c-97c8-4572-a766-343b805d6dab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.564231 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bspjl\" (UniqueName: \"kubernetes.io/projected/19569f7c-97c8-4572-a766-343b805d6dab-kube-api-access-bspjl\") on node \"crc\" DevicePath \"\"" Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.564280 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.564312 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19569f7c-97c8-4572-a766-343b805d6dab-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.849221 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-9649p" event={"ID":"19569f7c-97c8-4572-a766-343b805d6dab","Type":"ContainerDied","Data":"f8c605276ce6656bb32bf7bcb8d0a9701156d9497255702f43b69c41232f433b"} Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.849268 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9649p" Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.849281 4787 scope.go:117] "RemoveContainer" containerID="c38703269b5001358af6025d8a0db8462609fea8a064192fdb19e6d47490dc58" Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.876001 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9649p"] Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.879928 4787 scope.go:117] "RemoveContainer" containerID="ff0d20fe68b855b08c5489d483fef97f8eaedcc3ffc20b76193131950f701258" Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.882928 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9649p"] Jan 26 18:26:15 crc kubenswrapper[4787]: I0126 18:26:15.900649 4787 scope.go:117] "RemoveContainer" containerID="8be6d9d73ede695b53804152337344e3d1ce0ee8e76d4e9993ae19aad31bb369" Jan 26 18:26:17 crc kubenswrapper[4787]: I0126 18:26:17.599890 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19569f7c-97c8-4572-a766-343b805d6dab" path="/var/lib/kubelet/pods/19569f7c-97c8-4572-a766-343b805d6dab/volumes" Jan 26 18:26:25 crc kubenswrapper[4787]: I0126 18:26:25.590093 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:26:25 crc kubenswrapper[4787]: E0126 18:26:25.591040 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:26:38 crc kubenswrapper[4787]: I0126 18:26:38.589849 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:26:38 crc kubenswrapper[4787]: E0126 18:26:38.590667 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:26:50 crc kubenswrapper[4787]: I0126 18:26:50.589394 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:26:50 crc kubenswrapper[4787]: E0126 18:26:50.590189 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:27:04 crc kubenswrapper[4787]: I0126 18:27:04.590120 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:27:04 crc kubenswrapper[4787]: E0126 18:27:04.591128 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:27:19 crc kubenswrapper[4787]: I0126 18:27:19.589852 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:27:19 crc kubenswrapper[4787]: E0126 18:27:19.590563 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:27:33 crc kubenswrapper[4787]: I0126 18:27:33.588986 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:27:33 crc kubenswrapper[4787]: E0126 18:27:33.589590 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:27:46 crc kubenswrapper[4787]: I0126 18:27:46.588650 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:27:46 crc kubenswrapper[4787]: E0126 18:27:46.589348 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:27:58 crc kubenswrapper[4787]: I0126 18:27:58.589123 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:27:58 crc kubenswrapper[4787]: E0126 18:27:58.589969 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:28:10 crc kubenswrapper[4787]: I0126 18:28:10.590255 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:28:10 crc kubenswrapper[4787]: E0126 18:28:10.590992 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:28:21 crc kubenswrapper[4787]: I0126 18:28:21.595299 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:28:21 crc kubenswrapper[4787]: E0126 18:28:21.596295 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:28:33 crc kubenswrapper[4787]: I0126 18:28:33.589848 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:28:33 crc kubenswrapper[4787]: E0126 18:28:33.590796 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:28:48 crc kubenswrapper[4787]: I0126 18:28:48.589482 4787 scope.go:117] "RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:28:48 crc kubenswrapper[4787]: I0126 18:28:48.979390 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"28c493d40c5a3958008e3ffae3af8ecbbb5beb691f60cdc699115ca72ba45c30"} Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.158612 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq"] Jan 26 18:30:00 crc kubenswrapper[4787]: E0126 18:30:00.159536 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19569f7c-97c8-4572-a766-343b805d6dab" containerName="extract-content" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.159553 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="19569f7c-97c8-4572-a766-343b805d6dab" containerName="extract-content" Jan 26 18:30:00 crc kubenswrapper[4787]: E0126 18:30:00.159584 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19569f7c-97c8-4572-a766-343b805d6dab" containerName="extract-utilities" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.159595 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="19569f7c-97c8-4572-a766-343b805d6dab" containerName="extract-utilities" Jan 26 18:30:00 crc kubenswrapper[4787]: E0126 18:30:00.159611 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19569f7c-97c8-4572-a766-343b805d6dab" containerName="registry-server" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.159620 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="19569f7c-97c8-4572-a766-343b805d6dab" containerName="registry-server" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.159804 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="19569f7c-97c8-4572-a766-343b805d6dab" containerName="registry-server" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.160497 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.163222 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.163377 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.170998 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq"] Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.202056 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c470f855-7b78-4696-a92a-f499f0320def-secret-volume\") pod \"collect-profiles-29490870-xkdwq\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.202099 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c470f855-7b78-4696-a92a-f499f0320def-config-volume\") pod \"collect-profiles-29490870-xkdwq\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.202140 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbb4f\" (UniqueName: \"kubernetes.io/projected/c470f855-7b78-4696-a92a-f499f0320def-kube-api-access-cbb4f\") pod \"collect-profiles-29490870-xkdwq\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.303914 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c470f855-7b78-4696-a92a-f499f0320def-secret-volume\") pod \"collect-profiles-29490870-xkdwq\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.304034 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c470f855-7b78-4696-a92a-f499f0320def-config-volume\") pod \"collect-profiles-29490870-xkdwq\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.304107 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbb4f\" (UniqueName: \"kubernetes.io/projected/c470f855-7b78-4696-a92a-f499f0320def-kube-api-access-cbb4f\") pod \"collect-profiles-29490870-xkdwq\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.305943 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c470f855-7b78-4696-a92a-f499f0320def-config-volume\") pod \"collect-profiles-29490870-xkdwq\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.314251 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c470f855-7b78-4696-a92a-f499f0320def-secret-volume\") pod \"collect-profiles-29490870-xkdwq\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.331899 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbb4f\" (UniqueName: \"kubernetes.io/projected/c470f855-7b78-4696-a92a-f499f0320def-kube-api-access-cbb4f\") pod \"collect-profiles-29490870-xkdwq\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.483773 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:00 crc kubenswrapper[4787]: I0126 18:30:00.892011 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq"] Jan 26 18:30:01 crc kubenswrapper[4787]: I0126 18:30:01.623904 4787 generic.go:334] "Generic (PLEG): container finished" podID="c470f855-7b78-4696-a92a-f499f0320def" containerID="ea04f13cd8f3ae5b5ea4bc9f09b85351eaf43bdf8c2e5d77ab6ef07b72adff6d" exitCode=0 Jan 26 18:30:01 crc kubenswrapper[4787]: I0126 18:30:01.624023 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" event={"ID":"c470f855-7b78-4696-a92a-f499f0320def","Type":"ContainerDied","Data":"ea04f13cd8f3ae5b5ea4bc9f09b85351eaf43bdf8c2e5d77ab6ef07b72adff6d"} Jan 26 18:30:01 crc kubenswrapper[4787]: I0126 18:30:01.624229 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" 
event={"ID":"c470f855-7b78-4696-a92a-f499f0320def","Type":"ContainerStarted","Data":"a846fcaed5ba43134be3d36990cb9fac6606d83b8e43e8dcb3df74834c1f526b"} Jan 26 18:30:02 crc kubenswrapper[4787]: I0126 18:30:02.931902 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.052834 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c470f855-7b78-4696-a92a-f499f0320def-config-volume\") pod \"c470f855-7b78-4696-a92a-f499f0320def\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.052915 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbb4f\" (UniqueName: \"kubernetes.io/projected/c470f855-7b78-4696-a92a-f499f0320def-kube-api-access-cbb4f\") pod \"c470f855-7b78-4696-a92a-f499f0320def\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.052959 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c470f855-7b78-4696-a92a-f499f0320def-secret-volume\") pod \"c470f855-7b78-4696-a92a-f499f0320def\" (UID: \"c470f855-7b78-4696-a92a-f499f0320def\") " Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.054072 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c470f855-7b78-4696-a92a-f499f0320def-config-volume" (OuterVolumeSpecName: "config-volume") pod "c470f855-7b78-4696-a92a-f499f0320def" (UID: "c470f855-7b78-4696-a92a-f499f0320def"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.057816 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c470f855-7b78-4696-a92a-f499f0320def-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c470f855-7b78-4696-a92a-f499f0320def" (UID: "c470f855-7b78-4696-a92a-f499f0320def"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.058321 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c470f855-7b78-4696-a92a-f499f0320def-kube-api-access-cbb4f" (OuterVolumeSpecName: "kube-api-access-cbb4f") pod "c470f855-7b78-4696-a92a-f499f0320def" (UID: "c470f855-7b78-4696-a92a-f499f0320def"). InnerVolumeSpecName "kube-api-access-cbb4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.154831 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c470f855-7b78-4696-a92a-f499f0320def-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.154893 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbb4f\" (UniqueName: \"kubernetes.io/projected/c470f855-7b78-4696-a92a-f499f0320def-kube-api-access-cbb4f\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.154906 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c470f855-7b78-4696-a92a-f499f0320def-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.674197 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" 
event={"ID":"c470f855-7b78-4696-a92a-f499f0320def","Type":"ContainerDied","Data":"a846fcaed5ba43134be3d36990cb9fac6606d83b8e43e8dcb3df74834c1f526b"} Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.674237 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a846fcaed5ba43134be3d36990cb9fac6606d83b8e43e8dcb3df74834c1f526b" Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.674314 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq" Jan 26 18:30:03 crc kubenswrapper[4787]: I0126 18:30:03.996913 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74"] Jan 26 18:30:04 crc kubenswrapper[4787]: I0126 18:30:04.001463 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-pdm74"] Jan 26 18:30:05 crc kubenswrapper[4787]: I0126 18:30:05.602199 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e28a309-ee20-408a-b0a1-d1c457139803" path="/var/lib/kubelet/pods/2e28a309-ee20-408a-b0a1-d1c457139803/volumes" Jan 26 18:30:57 crc kubenswrapper[4787]: I0126 18:30:57.182100 4787 scope.go:117] "RemoveContainer" containerID="f5fa366160dea108a479e35d70501195180c7b65c253f9f0ed485ff780a7f647" Jan 26 18:31:16 crc kubenswrapper[4787]: I0126 18:31:16.807984 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:31:16 crc kubenswrapper[4787]: I0126 18:31:16.808407 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.089937 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mfxr8"] Jan 26 18:31:33 crc kubenswrapper[4787]: E0126 18:31:33.094586 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c470f855-7b78-4696-a92a-f499f0320def" containerName="collect-profiles" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.094625 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c470f855-7b78-4696-a92a-f499f0320def" containerName="collect-profiles" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.094977 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c470f855-7b78-4696-a92a-f499f0320def" containerName="collect-profiles" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.096226 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.123239 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfxr8"] Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.219555 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-catalog-content\") pod \"redhat-marketplace-mfxr8\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.219608 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzx76\" (UniqueName: \"kubernetes.io/projected/fa0405b1-a870-4ddb-a449-72ac9d9a0288-kube-api-access-tzx76\") pod \"redhat-marketplace-mfxr8\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.219671 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-utilities\") pod \"redhat-marketplace-mfxr8\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.321525 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-catalog-content\") pod \"redhat-marketplace-mfxr8\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.321583 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tzx76\" (UniqueName: \"kubernetes.io/projected/fa0405b1-a870-4ddb-a449-72ac9d9a0288-kube-api-access-tzx76\") pod \"redhat-marketplace-mfxr8\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.321654 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-utilities\") pod \"redhat-marketplace-mfxr8\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.322090 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-catalog-content\") pod \"redhat-marketplace-mfxr8\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.322216 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-utilities\") pod \"redhat-marketplace-mfxr8\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.354715 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzx76\" (UniqueName: \"kubernetes.io/projected/fa0405b1-a870-4ddb-a449-72ac9d9a0288-kube-api-access-tzx76\") pod \"redhat-marketplace-mfxr8\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.420623 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:33 crc kubenswrapper[4787]: I0126 18:31:33.892023 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfxr8"] Jan 26 18:31:34 crc kubenswrapper[4787]: I0126 18:31:34.422155 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" containerID="aa8a1cab4e0883e287a5a7bee4e45e818c7f607ab9cd3cdfcca88f10105f11e5" exitCode=0 Jan 26 18:31:34 crc kubenswrapper[4787]: I0126 18:31:34.422235 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfxr8" event={"ID":"fa0405b1-a870-4ddb-a449-72ac9d9a0288","Type":"ContainerDied","Data":"aa8a1cab4e0883e287a5a7bee4e45e818c7f607ab9cd3cdfcca88f10105f11e5"} Jan 26 18:31:34 crc kubenswrapper[4787]: I0126 18:31:34.423441 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfxr8" event={"ID":"fa0405b1-a870-4ddb-a449-72ac9d9a0288","Type":"ContainerStarted","Data":"a05c61b864aa86f70b8404378a45e79f0b5d2919bb376350213af4d743e76dd1"} Jan 26 18:31:34 crc kubenswrapper[4787]: I0126 18:31:34.424937 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:31:36 crc kubenswrapper[4787]: I0126 18:31:36.442164 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" containerID="18592c3f032dd4e42acf96d347b6c826a4f695ff73eb5bdecfa88488ffc28502" exitCode=0 Jan 26 18:31:36 crc kubenswrapper[4787]: I0126 18:31:36.442218 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfxr8" event={"ID":"fa0405b1-a870-4ddb-a449-72ac9d9a0288","Type":"ContainerDied","Data":"18592c3f032dd4e42acf96d347b6c826a4f695ff73eb5bdecfa88488ffc28502"} Jan 26 18:31:37 crc kubenswrapper[4787]: I0126 18:31:37.450987 4787 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-mfxr8" event={"ID":"fa0405b1-a870-4ddb-a449-72ac9d9a0288","Type":"ContainerStarted","Data":"96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d"} Jan 26 18:31:37 crc kubenswrapper[4787]: I0126 18:31:37.472437 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mfxr8" podStartSLOduration=2.077708851 podStartE2EDuration="4.472412187s" podCreationTimestamp="2026-01-26 18:31:33 +0000 UTC" firstStartedPulling="2026-01-26 18:31:34.424632801 +0000 UTC m=+2863.131768944" lastFinishedPulling="2026-01-26 18:31:36.819336147 +0000 UTC m=+2865.526472280" observedRunningTime="2026-01-26 18:31:37.468453141 +0000 UTC m=+2866.175589274" watchObservedRunningTime="2026-01-26 18:31:37.472412187 +0000 UTC m=+2866.179548320" Jan 26 18:31:38 crc kubenswrapper[4787]: I0126 18:31:38.871128 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5t8pk"] Jan 26 18:31:38 crc kubenswrapper[4787]: I0126 18:31:38.872741 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:38 crc kubenswrapper[4787]: I0126 18:31:38.884647 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5t8pk"] Jan 26 18:31:39 crc kubenswrapper[4787]: I0126 18:31:39.002476 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-utilities\") pod \"community-operators-5t8pk\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:39 crc kubenswrapper[4787]: I0126 18:31:39.002578 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62swh\" (UniqueName: \"kubernetes.io/projected/97420ad9-cb4f-49ce-891a-5622d410bfc2-kube-api-access-62swh\") pod \"community-operators-5t8pk\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:39 crc kubenswrapper[4787]: I0126 18:31:39.002648 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-catalog-content\") pod \"community-operators-5t8pk\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:39 crc kubenswrapper[4787]: I0126 18:31:39.104337 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62swh\" (UniqueName: \"kubernetes.io/projected/97420ad9-cb4f-49ce-891a-5622d410bfc2-kube-api-access-62swh\") pod \"community-operators-5t8pk\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:39 crc kubenswrapper[4787]: I0126 18:31:39.104409 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-catalog-content\") pod \"community-operators-5t8pk\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:39 crc kubenswrapper[4787]: I0126 18:31:39.104515 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-utilities\") pod \"community-operators-5t8pk\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:39 crc kubenswrapper[4787]: I0126 18:31:39.104902 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-catalog-content\") pod \"community-operators-5t8pk\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:39 crc kubenswrapper[4787]: I0126 18:31:39.104938 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-utilities\") pod \"community-operators-5t8pk\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:39 crc kubenswrapper[4787]: I0126 18:31:39.127083 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62swh\" (UniqueName: \"kubernetes.io/projected/97420ad9-cb4f-49ce-891a-5622d410bfc2-kube-api-access-62swh\") pod \"community-operators-5t8pk\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:39 crc kubenswrapper[4787]: I0126 18:31:39.190816 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:39 crc kubenswrapper[4787]: I0126 18:31:39.655597 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5t8pk"] Jan 26 18:31:40 crc kubenswrapper[4787]: I0126 18:31:40.473039 4787 generic.go:334] "Generic (PLEG): container finished" podID="97420ad9-cb4f-49ce-891a-5622d410bfc2" containerID="e88585907e156544c8438f1cfafd8af6e3b66b917f88c0e1cb105fa6dcd87f5c" exitCode=0 Jan 26 18:31:40 crc kubenswrapper[4787]: I0126 18:31:40.473151 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5t8pk" event={"ID":"97420ad9-cb4f-49ce-891a-5622d410bfc2","Type":"ContainerDied","Data":"e88585907e156544c8438f1cfafd8af6e3b66b917f88c0e1cb105fa6dcd87f5c"} Jan 26 18:31:40 crc kubenswrapper[4787]: I0126 18:31:40.473438 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5t8pk" event={"ID":"97420ad9-cb4f-49ce-891a-5622d410bfc2","Type":"ContainerStarted","Data":"62c453ab3d4ee22b28a2f115ebb11b6055abeaabefe737d21224f6c10f9aa663"} Jan 26 18:31:42 crc kubenswrapper[4787]: I0126 18:31:42.496628 4787 generic.go:334] "Generic (PLEG): container finished" podID="97420ad9-cb4f-49ce-891a-5622d410bfc2" containerID="25f31828cca4b5ff82a839c17a00d4f5f33c6cbf946c09ddd4dcf27ee8673554" exitCode=0 Jan 26 18:31:42 crc kubenswrapper[4787]: I0126 18:31:42.496702 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5t8pk" event={"ID":"97420ad9-cb4f-49ce-891a-5622d410bfc2","Type":"ContainerDied","Data":"25f31828cca4b5ff82a839c17a00d4f5f33c6cbf946c09ddd4dcf27ee8673554"} Jan 26 18:31:43 crc kubenswrapper[4787]: I0126 18:31:43.421660 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:43 crc kubenswrapper[4787]: I0126 18:31:43.422327 4787 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:43 crc kubenswrapper[4787]: I0126 18:31:43.474793 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:43 crc kubenswrapper[4787]: I0126 18:31:43.524630 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5t8pk" event={"ID":"97420ad9-cb4f-49ce-891a-5622d410bfc2","Type":"ContainerStarted","Data":"4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2"} Jan 26 18:31:43 crc kubenswrapper[4787]: I0126 18:31:43.543701 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5t8pk" podStartSLOduration=3.0128087 podStartE2EDuration="5.543678754s" podCreationTimestamp="2026-01-26 18:31:38 +0000 UTC" firstStartedPulling="2026-01-26 18:31:40.474847682 +0000 UTC m=+2869.181983815" lastFinishedPulling="2026-01-26 18:31:43.005717736 +0000 UTC m=+2871.712853869" observedRunningTime="2026-01-26 18:31:43.541959531 +0000 UTC m=+2872.249095664" watchObservedRunningTime="2026-01-26 18:31:43.543678754 +0000 UTC m=+2872.250814887" Jan 26 18:31:43 crc kubenswrapper[4787]: I0126 18:31:43.587376 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:45 crc kubenswrapper[4787]: I0126 18:31:45.654212 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfxr8"] Jan 26 18:31:45 crc kubenswrapper[4787]: I0126 18:31:45.654521 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mfxr8" podUID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" containerName="registry-server" containerID="cri-o://96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d" gracePeriod=2 Jan 
26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.272383 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.414206 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-catalog-content\") pod \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.414259 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-utilities\") pod \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.414288 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzx76\" (UniqueName: \"kubernetes.io/projected/fa0405b1-a870-4ddb-a449-72ac9d9a0288-kube-api-access-tzx76\") pod \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\" (UID: \"fa0405b1-a870-4ddb-a449-72ac9d9a0288\") " Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.415436 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-utilities" (OuterVolumeSpecName: "utilities") pod "fa0405b1-a870-4ddb-a449-72ac9d9a0288" (UID: "fa0405b1-a870-4ddb-a449-72ac9d9a0288"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.422270 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0405b1-a870-4ddb-a449-72ac9d9a0288-kube-api-access-tzx76" (OuterVolumeSpecName: "kube-api-access-tzx76") pod "fa0405b1-a870-4ddb-a449-72ac9d9a0288" (UID: "fa0405b1-a870-4ddb-a449-72ac9d9a0288"). InnerVolumeSpecName "kube-api-access-tzx76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.438219 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa0405b1-a870-4ddb-a449-72ac9d9a0288" (UID: "fa0405b1-a870-4ddb-a449-72ac9d9a0288"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.516566 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.516617 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0405b1-a870-4ddb-a449-72ac9d9a0288-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.516637 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzx76\" (UniqueName: \"kubernetes.io/projected/fa0405b1-a870-4ddb-a449-72ac9d9a0288-kube-api-access-tzx76\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.549437 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" 
containerID="96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d" exitCode=0 Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.549506 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfxr8" event={"ID":"fa0405b1-a870-4ddb-a449-72ac9d9a0288","Type":"ContainerDied","Data":"96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d"} Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.549558 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfxr8" event={"ID":"fa0405b1-a870-4ddb-a449-72ac9d9a0288","Type":"ContainerDied","Data":"a05c61b864aa86f70b8404378a45e79f0b5d2919bb376350213af4d743e76dd1"} Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.549582 4787 scope.go:117] "RemoveContainer" containerID="96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.549702 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfxr8" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.574789 4787 scope.go:117] "RemoveContainer" containerID="18592c3f032dd4e42acf96d347b6c826a4f695ff73eb5bdecfa88488ffc28502" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.586859 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfxr8"] Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.596192 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfxr8"] Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.597948 4787 scope.go:117] "RemoveContainer" containerID="aa8a1cab4e0883e287a5a7bee4e45e818c7f607ab9cd3cdfcca88f10105f11e5" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.618626 4787 scope.go:117] "RemoveContainer" containerID="96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d" Jan 26 18:31:46 crc kubenswrapper[4787]: E0126 18:31:46.619309 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d\": container with ID starting with 96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d not found: ID does not exist" containerID="96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.619354 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d"} err="failed to get container status \"96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d\": rpc error: code = NotFound desc = could not find container \"96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d\": container with ID starting with 96bad2764f26f2f2f9ea736e94e376de6a66aceeaaed279b46c0f3fde730656d not found: 
ID does not exist" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.619382 4787 scope.go:117] "RemoveContainer" containerID="18592c3f032dd4e42acf96d347b6c826a4f695ff73eb5bdecfa88488ffc28502" Jan 26 18:31:46 crc kubenswrapper[4787]: E0126 18:31:46.619752 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18592c3f032dd4e42acf96d347b6c826a4f695ff73eb5bdecfa88488ffc28502\": container with ID starting with 18592c3f032dd4e42acf96d347b6c826a4f695ff73eb5bdecfa88488ffc28502 not found: ID does not exist" containerID="18592c3f032dd4e42acf96d347b6c826a4f695ff73eb5bdecfa88488ffc28502" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.619837 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18592c3f032dd4e42acf96d347b6c826a4f695ff73eb5bdecfa88488ffc28502"} err="failed to get container status \"18592c3f032dd4e42acf96d347b6c826a4f695ff73eb5bdecfa88488ffc28502\": rpc error: code = NotFound desc = could not find container \"18592c3f032dd4e42acf96d347b6c826a4f695ff73eb5bdecfa88488ffc28502\": container with ID starting with 18592c3f032dd4e42acf96d347b6c826a4f695ff73eb5bdecfa88488ffc28502 not found: ID does not exist" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.619871 4787 scope.go:117] "RemoveContainer" containerID="aa8a1cab4e0883e287a5a7bee4e45e818c7f607ab9cd3cdfcca88f10105f11e5" Jan 26 18:31:46 crc kubenswrapper[4787]: E0126 18:31:46.620347 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa8a1cab4e0883e287a5a7bee4e45e818c7f607ab9cd3cdfcca88f10105f11e5\": container with ID starting with aa8a1cab4e0883e287a5a7bee4e45e818c7f607ab9cd3cdfcca88f10105f11e5 not found: ID does not exist" containerID="aa8a1cab4e0883e287a5a7bee4e45e818c7f607ab9cd3cdfcca88f10105f11e5" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.620379 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8a1cab4e0883e287a5a7bee4e45e818c7f607ab9cd3cdfcca88f10105f11e5"} err="failed to get container status \"aa8a1cab4e0883e287a5a7bee4e45e818c7f607ab9cd3cdfcca88f10105f11e5\": rpc error: code = NotFound desc = could not find container \"aa8a1cab4e0883e287a5a7bee4e45e818c7f607ab9cd3cdfcca88f10105f11e5\": container with ID starting with aa8a1cab4e0883e287a5a7bee4e45e818c7f607ab9cd3cdfcca88f10105f11e5 not found: ID does not exist" Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.807826 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:31:46 crc kubenswrapper[4787]: I0126 18:31:46.807888 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:31:47 crc kubenswrapper[4787]: I0126 18:31:47.616329 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" path="/var/lib/kubelet/pods/fa0405b1-a870-4ddb-a449-72ac9d9a0288/volumes" Jan 26 18:31:49 crc kubenswrapper[4787]: I0126 18:31:49.191805 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:49 crc kubenswrapper[4787]: I0126 18:31:49.192049 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:49 crc kubenswrapper[4787]: I0126 18:31:49.262218 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:49 crc kubenswrapper[4787]: I0126 18:31:49.638433 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:49 crc kubenswrapper[4787]: I0126 18:31:49.858216 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5t8pk"] Jan 26 18:31:51 crc kubenswrapper[4787]: I0126 18:31:51.593903 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5t8pk" podUID="97420ad9-cb4f-49ce-891a-5622d410bfc2" containerName="registry-server" containerID="cri-o://4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2" gracePeriod=2 Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.569870 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.609626 4787 generic.go:334] "Generic (PLEG): container finished" podID="97420ad9-cb4f-49ce-891a-5622d410bfc2" containerID="4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2" exitCode=0 Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.609670 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5t8pk" event={"ID":"97420ad9-cb4f-49ce-891a-5622d410bfc2","Type":"ContainerDied","Data":"4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2"} Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.609696 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5t8pk" event={"ID":"97420ad9-cb4f-49ce-891a-5622d410bfc2","Type":"ContainerDied","Data":"62c453ab3d4ee22b28a2f115ebb11b6055abeaabefe737d21224f6c10f9aa663"} Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.609712 4787 scope.go:117] 
"RemoveContainer" containerID="4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.609715 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5t8pk" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.635182 4787 scope.go:117] "RemoveContainer" containerID="25f31828cca4b5ff82a839c17a00d4f5f33c6cbf946c09ddd4dcf27ee8673554" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.660186 4787 scope.go:117] "RemoveContainer" containerID="e88585907e156544c8438f1cfafd8af6e3b66b917f88c0e1cb105fa6dcd87f5c" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.678267 4787 scope.go:117] "RemoveContainer" containerID="4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2" Jan 26 18:31:52 crc kubenswrapper[4787]: E0126 18:31:52.678657 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2\": container with ID starting with 4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2 not found: ID does not exist" containerID="4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.678699 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2"} err="failed to get container status \"4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2\": rpc error: code = NotFound desc = could not find container \"4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2\": container with ID starting with 4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2 not found: ID does not exist" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.678726 4787 scope.go:117] "RemoveContainer" 
containerID="25f31828cca4b5ff82a839c17a00d4f5f33c6cbf946c09ddd4dcf27ee8673554" Jan 26 18:31:52 crc kubenswrapper[4787]: E0126 18:31:52.679038 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f31828cca4b5ff82a839c17a00d4f5f33c6cbf946c09ddd4dcf27ee8673554\": container with ID starting with 25f31828cca4b5ff82a839c17a00d4f5f33c6cbf946c09ddd4dcf27ee8673554 not found: ID does not exist" containerID="25f31828cca4b5ff82a839c17a00d4f5f33c6cbf946c09ddd4dcf27ee8673554" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.679072 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f31828cca4b5ff82a839c17a00d4f5f33c6cbf946c09ddd4dcf27ee8673554"} err="failed to get container status \"25f31828cca4b5ff82a839c17a00d4f5f33c6cbf946c09ddd4dcf27ee8673554\": rpc error: code = NotFound desc = could not find container \"25f31828cca4b5ff82a839c17a00d4f5f33c6cbf946c09ddd4dcf27ee8673554\": container with ID starting with 25f31828cca4b5ff82a839c17a00d4f5f33c6cbf946c09ddd4dcf27ee8673554 not found: ID does not exist" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.679100 4787 scope.go:117] "RemoveContainer" containerID="e88585907e156544c8438f1cfafd8af6e3b66b917f88c0e1cb105fa6dcd87f5c" Jan 26 18:31:52 crc kubenswrapper[4787]: E0126 18:31:52.679338 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e88585907e156544c8438f1cfafd8af6e3b66b917f88c0e1cb105fa6dcd87f5c\": container with ID starting with e88585907e156544c8438f1cfafd8af6e3b66b917f88c0e1cb105fa6dcd87f5c not found: ID does not exist" containerID="e88585907e156544c8438f1cfafd8af6e3b66b917f88c0e1cb105fa6dcd87f5c" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.679362 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e88585907e156544c8438f1cfafd8af6e3b66b917f88c0e1cb105fa6dcd87f5c"} err="failed to get container status \"e88585907e156544c8438f1cfafd8af6e3b66b917f88c0e1cb105fa6dcd87f5c\": rpc error: code = NotFound desc = could not find container \"e88585907e156544c8438f1cfafd8af6e3b66b917f88c0e1cb105fa6dcd87f5c\": container with ID starting with e88585907e156544c8438f1cfafd8af6e3b66b917f88c0e1cb105fa6dcd87f5c not found: ID does not exist" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.710263 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-catalog-content\") pod \"97420ad9-cb4f-49ce-891a-5622d410bfc2\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.710348 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62swh\" (UniqueName: \"kubernetes.io/projected/97420ad9-cb4f-49ce-891a-5622d410bfc2-kube-api-access-62swh\") pod \"97420ad9-cb4f-49ce-891a-5622d410bfc2\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.710463 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-utilities\") pod \"97420ad9-cb4f-49ce-891a-5622d410bfc2\" (UID: \"97420ad9-cb4f-49ce-891a-5622d410bfc2\") " Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.711892 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-utilities" (OuterVolumeSpecName: "utilities") pod "97420ad9-cb4f-49ce-891a-5622d410bfc2" (UID: "97420ad9-cb4f-49ce-891a-5622d410bfc2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.717719 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97420ad9-cb4f-49ce-891a-5622d410bfc2-kube-api-access-62swh" (OuterVolumeSpecName: "kube-api-access-62swh") pod "97420ad9-cb4f-49ce-891a-5622d410bfc2" (UID: "97420ad9-cb4f-49ce-891a-5622d410bfc2"). InnerVolumeSpecName "kube-api-access-62swh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.785623 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97420ad9-cb4f-49ce-891a-5622d410bfc2" (UID: "97420ad9-cb4f-49ce-891a-5622d410bfc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.812735 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.812781 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62swh\" (UniqueName: \"kubernetes.io/projected/97420ad9-cb4f-49ce-891a-5622d410bfc2-kube-api-access-62swh\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.812796 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97420ad9-cb4f-49ce-891a-5622d410bfc2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 18:31:52.960283 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5t8pk"] Jan 26 18:31:52 crc kubenswrapper[4787]: I0126 
18:31:52.968534 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5t8pk"] Jan 26 18:31:53 crc kubenswrapper[4787]: I0126 18:31:53.600382 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97420ad9-cb4f-49ce-891a-5622d410bfc2" path="/var/lib/kubelet/pods/97420ad9-cb4f-49ce-891a-5622d410bfc2/volumes" Jan 26 18:32:00 crc kubenswrapper[4787]: E0126 18:32:00.891824 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97420ad9_cb4f_49ce_891a_5622d410bfc2.slice/crio-4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:32:11 crc kubenswrapper[4787]: E0126 18:32:11.103991 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97420ad9_cb4f_49ce_891a_5622d410bfc2.slice/crio-4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:32:16 crc kubenswrapper[4787]: I0126 18:32:16.808266 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:32:16 crc kubenswrapper[4787]: I0126 18:32:16.808685 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:32:16 crc kubenswrapper[4787]: I0126 
18:32:16.808730 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 18:32:16 crc kubenswrapper[4787]: I0126 18:32:16.809330 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28c493d40c5a3958008e3ffae3af8ecbbb5beb691f60cdc699115ca72ba45c30"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:32:16 crc kubenswrapper[4787]: I0126 18:32:16.809382 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://28c493d40c5a3958008e3ffae3af8ecbbb5beb691f60cdc699115ca72ba45c30" gracePeriod=600 Jan 26 18:32:17 crc kubenswrapper[4787]: I0126 18:32:17.857302 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="28c493d40c5a3958008e3ffae3af8ecbbb5beb691f60cdc699115ca72ba45c30" exitCode=0 Jan 26 18:32:17 crc kubenswrapper[4787]: I0126 18:32:17.858410 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"28c493d40c5a3958008e3ffae3af8ecbbb5beb691f60cdc699115ca72ba45c30"} Jan 26 18:32:17 crc kubenswrapper[4787]: I0126 18:32:17.858501 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e"} Jan 26 18:32:17 crc kubenswrapper[4787]: I0126 18:32:17.858571 4787 scope.go:117] 
"RemoveContainer" containerID="7850dd423485fc53b37243ad90c5e5592ab78e2665efbb3ffa5f8fce35a3dcb0" Jan 26 18:32:21 crc kubenswrapper[4787]: E0126 18:32:21.327230 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97420ad9_cb4f_49ce_891a_5622d410bfc2.slice/crio-4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:32:31 crc kubenswrapper[4787]: E0126 18:32:31.546271 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97420ad9_cb4f_49ce_891a_5622d410bfc2.slice/crio-4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:32:41 crc kubenswrapper[4787]: E0126 18:32:41.717416 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97420ad9_cb4f_49ce_891a_5622d410bfc2.slice/crio-4611a63b4b5a099ebc2305b1cadace519fcb1b6e7d810268ceba0f167e0606a2.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:32:51 crc kubenswrapper[4787]: E0126 18:32:51.636297 4787 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c7366ac62bf3e75285caf068aa851f9032002ac7ec85f0831ee2423e7f5ed4a4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c7366ac62bf3e75285caf068aa851f9032002ac7ec85f0831ee2423e7f5ed4a4/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openshift-marketplace_community-operators-5t8pk_97420ad9-cb4f-49ce-891a-5622d410bfc2/registry-server/0.log" to get inode usage: stat 
/var/log/pods/openshift-marketplace_community-operators-5t8pk_97420ad9-cb4f-49ce-891a-5622d410bfc2/registry-server/0.log: no such file or directory Jan 26 18:34:46 crc kubenswrapper[4787]: I0126 18:34:46.808588 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:34:46 crc kubenswrapper[4787]: I0126 18:34:46.809256 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.678222 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8zwdd"] Jan 26 18:34:53 crc kubenswrapper[4787]: E0126 18:34:53.679253 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97420ad9-cb4f-49ce-891a-5622d410bfc2" containerName="registry-server" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.679270 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="97420ad9-cb4f-49ce-891a-5622d410bfc2" containerName="registry-server" Jan 26 18:34:53 crc kubenswrapper[4787]: E0126 18:34:53.679288 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97420ad9-cb4f-49ce-891a-5622d410bfc2" containerName="extract-utilities" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.679296 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="97420ad9-cb4f-49ce-891a-5622d410bfc2" containerName="extract-utilities" Jan 26 18:34:53 crc kubenswrapper[4787]: E0126 18:34:53.679313 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" containerName="extract-content" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.679321 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" containerName="extract-content" Jan 26 18:34:53 crc kubenswrapper[4787]: E0126 18:34:53.679331 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97420ad9-cb4f-49ce-891a-5622d410bfc2" containerName="extract-content" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.679336 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="97420ad9-cb4f-49ce-891a-5622d410bfc2" containerName="extract-content" Jan 26 18:34:53 crc kubenswrapper[4787]: E0126 18:34:53.679344 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" containerName="extract-utilities" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.679349 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" containerName="extract-utilities" Jan 26 18:34:53 crc kubenswrapper[4787]: E0126 18:34:53.679363 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" containerName="registry-server" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.679371 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" containerName="registry-server" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.679517 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="97420ad9-cb4f-49ce-891a-5622d410bfc2" containerName="registry-server" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.679535 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa0405b1-a870-4ddb-a449-72ac9d9a0288" containerName="registry-server" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.680616 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.710764 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8zwdd"] Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.812673 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-catalog-content\") pod \"redhat-operators-8zwdd\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.812760 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-utilities\") pod \"redhat-operators-8zwdd\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.812825 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lss29\" (UniqueName: \"kubernetes.io/projected/b23ddb59-d670-4692-a4de-df0936dd7706-kube-api-access-lss29\") pod \"redhat-operators-8zwdd\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.914315 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lss29\" (UniqueName: \"kubernetes.io/projected/b23ddb59-d670-4692-a4de-df0936dd7706-kube-api-access-lss29\") pod \"redhat-operators-8zwdd\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.914372 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-catalog-content\") pod \"redhat-operators-8zwdd\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.914414 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-utilities\") pod \"redhat-operators-8zwdd\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.914926 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-utilities\") pod \"redhat-operators-8zwdd\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.914978 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-catalog-content\") pod \"redhat-operators-8zwdd\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:34:53 crc kubenswrapper[4787]: I0126 18:34:53.942233 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lss29\" (UniqueName: \"kubernetes.io/projected/b23ddb59-d670-4692-a4de-df0936dd7706-kube-api-access-lss29\") pod \"redhat-operators-8zwdd\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:34:54 crc kubenswrapper[4787]: I0126 18:34:54.034569 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:34:54 crc kubenswrapper[4787]: I0126 18:34:54.287306 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8zwdd"] Jan 26 18:34:55 crc kubenswrapper[4787]: I0126 18:34:55.131613 4787 generic.go:334] "Generic (PLEG): container finished" podID="b23ddb59-d670-4692-a4de-df0936dd7706" containerID="b38afe3b8f65df45cf1d49d2c9c187d492336da0e1e75a8b8017f365fe91d3f6" exitCode=0 Jan 26 18:34:55 crc kubenswrapper[4787]: I0126 18:34:55.131693 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8zwdd" event={"ID":"b23ddb59-d670-4692-a4de-df0936dd7706","Type":"ContainerDied","Data":"b38afe3b8f65df45cf1d49d2c9c187d492336da0e1e75a8b8017f365fe91d3f6"} Jan 26 18:34:55 crc kubenswrapper[4787]: I0126 18:34:55.131991 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8zwdd" event={"ID":"b23ddb59-d670-4692-a4de-df0936dd7706","Type":"ContainerStarted","Data":"25f8bbf0ba820680054165033cc8fbc595c930c76eae6d02bab9b9ccf5d317f7"} Jan 26 18:34:56 crc kubenswrapper[4787]: I0126 18:34:56.146068 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8zwdd" event={"ID":"b23ddb59-d670-4692-a4de-df0936dd7706","Type":"ContainerStarted","Data":"f1ffa22d3fccd4acc900c2e77f99ee79c2b6a037c9cfe244ad1af2e9fa861a15"} Jan 26 18:34:57 crc kubenswrapper[4787]: I0126 18:34:57.158225 4787 generic.go:334] "Generic (PLEG): container finished" podID="b23ddb59-d670-4692-a4de-df0936dd7706" containerID="f1ffa22d3fccd4acc900c2e77f99ee79c2b6a037c9cfe244ad1af2e9fa861a15" exitCode=0 Jan 26 18:34:57 crc kubenswrapper[4787]: I0126 18:34:57.158320 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8zwdd" 
event={"ID":"b23ddb59-d670-4692-a4de-df0936dd7706","Type":"ContainerDied","Data":"f1ffa22d3fccd4acc900c2e77f99ee79c2b6a037c9cfe244ad1af2e9fa861a15"} Jan 26 18:34:58 crc kubenswrapper[4787]: I0126 18:34:58.167847 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8zwdd" event={"ID":"b23ddb59-d670-4692-a4de-df0936dd7706","Type":"ContainerStarted","Data":"2d010032c6d7818c7e259b465b235ceea581c8ded9b3a4807de2236f414a9a63"} Jan 26 18:34:58 crc kubenswrapper[4787]: I0126 18:34:58.192024 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8zwdd" podStartSLOduration=2.722930218 podStartE2EDuration="5.192005857s" podCreationTimestamp="2026-01-26 18:34:53 +0000 UTC" firstStartedPulling="2026-01-26 18:34:55.133355775 +0000 UTC m=+3063.840491908" lastFinishedPulling="2026-01-26 18:34:57.602431414 +0000 UTC m=+3066.309567547" observedRunningTime="2026-01-26 18:34:58.184118225 +0000 UTC m=+3066.891254378" watchObservedRunningTime="2026-01-26 18:34:58.192005857 +0000 UTC m=+3066.899142000" Jan 26 18:35:04 crc kubenswrapper[4787]: I0126 18:35:04.035455 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:35:04 crc kubenswrapper[4787]: I0126 18:35:04.035792 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:35:05 crc kubenswrapper[4787]: I0126 18:35:05.090888 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8zwdd" podUID="b23ddb59-d670-4692-a4de-df0936dd7706" containerName="registry-server" probeResult="failure" output=< Jan 26 18:35:05 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 26 18:35:05 crc kubenswrapper[4787]: > Jan 26 18:35:14 crc kubenswrapper[4787]: I0126 18:35:14.109870 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:35:14 crc kubenswrapper[4787]: I0126 18:35:14.193525 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:35:14 crc kubenswrapper[4787]: I0126 18:35:14.358432 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8zwdd"] Jan 26 18:35:15 crc kubenswrapper[4787]: I0126 18:35:15.313605 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8zwdd" podUID="b23ddb59-d670-4692-a4de-df0936dd7706" containerName="registry-server" containerID="cri-o://2d010032c6d7818c7e259b465b235ceea581c8ded9b3a4807de2236f414a9a63" gracePeriod=2 Jan 26 18:35:16 crc kubenswrapper[4787]: I0126 18:35:16.808507 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:35:16 crc kubenswrapper[4787]: I0126 18:35:16.809041 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.341283 4787 generic.go:334] "Generic (PLEG): container finished" podID="b23ddb59-d670-4692-a4de-df0936dd7706" containerID="2d010032c6d7818c7e259b465b235ceea581c8ded9b3a4807de2236f414a9a63" exitCode=0 Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.341332 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8zwdd" 
event={"ID":"b23ddb59-d670-4692-a4de-df0936dd7706","Type":"ContainerDied","Data":"2d010032c6d7818c7e259b465b235ceea581c8ded9b3a4807de2236f414a9a63"} Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.509993 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.600884 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-utilities\") pod \"b23ddb59-d670-4692-a4de-df0936dd7706\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.600986 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-catalog-content\") pod \"b23ddb59-d670-4692-a4de-df0936dd7706\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.601071 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lss29\" (UniqueName: \"kubernetes.io/projected/b23ddb59-d670-4692-a4de-df0936dd7706-kube-api-access-lss29\") pod \"b23ddb59-d670-4692-a4de-df0936dd7706\" (UID: \"b23ddb59-d670-4692-a4de-df0936dd7706\") " Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.602073 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-utilities" (OuterVolumeSpecName: "utilities") pod "b23ddb59-d670-4692-a4de-df0936dd7706" (UID: "b23ddb59-d670-4692-a4de-df0936dd7706"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.609302 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23ddb59-d670-4692-a4de-df0936dd7706-kube-api-access-lss29" (OuterVolumeSpecName: "kube-api-access-lss29") pod "b23ddb59-d670-4692-a4de-df0936dd7706" (UID: "b23ddb59-d670-4692-a4de-df0936dd7706"). InnerVolumeSpecName "kube-api-access-lss29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.702529 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lss29\" (UniqueName: \"kubernetes.io/projected/b23ddb59-d670-4692-a4de-df0936dd7706-kube-api-access-lss29\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.702578 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.749686 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b23ddb59-d670-4692-a4de-df0936dd7706" (UID: "b23ddb59-d670-4692-a4de-df0936dd7706"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:35:17 crc kubenswrapper[4787]: I0126 18:35:17.804144 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23ddb59-d670-4692-a4de-df0936dd7706-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:18 crc kubenswrapper[4787]: I0126 18:35:18.355032 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8zwdd" event={"ID":"b23ddb59-d670-4692-a4de-df0936dd7706","Type":"ContainerDied","Data":"25f8bbf0ba820680054165033cc8fbc595c930c76eae6d02bab9b9ccf5d317f7"} Jan 26 18:35:18 crc kubenswrapper[4787]: I0126 18:35:18.355121 4787 scope.go:117] "RemoveContainer" containerID="2d010032c6d7818c7e259b465b235ceea581c8ded9b3a4807de2236f414a9a63" Jan 26 18:35:18 crc kubenswrapper[4787]: I0126 18:35:18.355277 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8zwdd" Jan 26 18:35:18 crc kubenswrapper[4787]: I0126 18:35:18.385759 4787 scope.go:117] "RemoveContainer" containerID="f1ffa22d3fccd4acc900c2e77f99ee79c2b6a037c9cfe244ad1af2e9fa861a15" Jan 26 18:35:18 crc kubenswrapper[4787]: I0126 18:35:18.415839 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8zwdd"] Jan 26 18:35:18 crc kubenswrapper[4787]: I0126 18:35:18.422830 4787 scope.go:117] "RemoveContainer" containerID="b38afe3b8f65df45cf1d49d2c9c187d492336da0e1e75a8b8017f365fe91d3f6" Jan 26 18:35:18 crc kubenswrapper[4787]: I0126 18:35:18.428260 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8zwdd"] Jan 26 18:35:19 crc kubenswrapper[4787]: I0126 18:35:19.608425 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23ddb59-d670-4692-a4de-df0936dd7706" path="/var/lib/kubelet/pods/b23ddb59-d670-4692-a4de-df0936dd7706/volumes" Jan 26 18:35:46 crc 
kubenswrapper[4787]: I0126 18:35:46.807901 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:35:46 crc kubenswrapper[4787]: I0126 18:35:46.808451 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:35:46 crc kubenswrapper[4787]: I0126 18:35:46.808498 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 18:35:46 crc kubenswrapper[4787]: I0126 18:35:46.809122 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:35:46 crc kubenswrapper[4787]: I0126 18:35:46.809173 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" gracePeriod=600 Jan 26 18:35:46 crc kubenswrapper[4787]: E0126 18:35:46.949485 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:35:47 crc kubenswrapper[4787]: I0126 18:35:47.615451 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" exitCode=0 Jan 26 18:35:47 crc kubenswrapper[4787]: I0126 18:35:47.615495 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e"} Jan 26 18:35:47 crc kubenswrapper[4787]: I0126 18:35:47.615528 4787 scope.go:117] "RemoveContainer" containerID="28c493d40c5a3958008e3ffae3af8ecbbb5beb691f60cdc699115ca72ba45c30" Jan 26 18:35:47 crc kubenswrapper[4787]: I0126 18:35:47.616125 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:35:47 crc kubenswrapper[4787]: E0126 18:35:47.616392 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:36:01 crc kubenswrapper[4787]: I0126 18:36:01.598187 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:36:01 crc kubenswrapper[4787]: E0126 18:36:01.599199 4787 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:36:15 crc kubenswrapper[4787]: I0126 18:36:15.590304 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:36:15 crc kubenswrapper[4787]: E0126 18:36:15.591314 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:36:27 crc kubenswrapper[4787]: I0126 18:36:27.588731 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:36:27 crc kubenswrapper[4787]: E0126 18:36:27.589725 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:36:41 crc kubenswrapper[4787]: I0126 18:36:41.597109 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:36:41 crc kubenswrapper[4787]: E0126 18:36:41.598068 4787 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:36:52 crc kubenswrapper[4787]: I0126 18:36:52.589975 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:36:52 crc kubenswrapper[4787]: E0126 18:36:52.590749 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:37:07 crc kubenswrapper[4787]: I0126 18:37:07.589555 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:37:07 crc kubenswrapper[4787]: E0126 18:37:07.591211 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.766536 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gfqbj"] Jan 26 18:37:12 crc kubenswrapper[4787]: E0126 18:37:12.767234 4787 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b23ddb59-d670-4692-a4de-df0936dd7706" containerName="extract-utilities" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.767250 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23ddb59-d670-4692-a4de-df0936dd7706" containerName="extract-utilities" Jan 26 18:37:12 crc kubenswrapper[4787]: E0126 18:37:12.767261 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23ddb59-d670-4692-a4de-df0936dd7706" containerName="extract-content" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.767269 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23ddb59-d670-4692-a4de-df0936dd7706" containerName="extract-content" Jan 26 18:37:12 crc kubenswrapper[4787]: E0126 18:37:12.767290 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23ddb59-d670-4692-a4de-df0936dd7706" containerName="registry-server" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.767299 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23ddb59-d670-4692-a4de-df0936dd7706" containerName="registry-server" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.767463 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23ddb59-d670-4692-a4de-df0936dd7706" containerName="registry-server" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.768711 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.783158 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-catalog-content\") pod \"certified-operators-gfqbj\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.783208 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntkx7\" (UniqueName: \"kubernetes.io/projected/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-kube-api-access-ntkx7\") pod \"certified-operators-gfqbj\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.783241 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-utilities\") pod \"certified-operators-gfqbj\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.787989 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gfqbj"] Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.884096 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-catalog-content\") pod \"certified-operators-gfqbj\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.884138 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ntkx7\" (UniqueName: \"kubernetes.io/projected/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-kube-api-access-ntkx7\") pod \"certified-operators-gfqbj\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.884168 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-utilities\") pod \"certified-operators-gfqbj\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.884686 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-utilities\") pod \"certified-operators-gfqbj\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.885216 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-catalog-content\") pod \"certified-operators-gfqbj\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:12 crc kubenswrapper[4787]: I0126 18:37:12.903426 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntkx7\" (UniqueName: \"kubernetes.io/projected/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-kube-api-access-ntkx7\") pod \"certified-operators-gfqbj\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:13 crc kubenswrapper[4787]: I0126 18:37:13.094737 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:13 crc kubenswrapper[4787]: I0126 18:37:13.585719 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gfqbj"] Jan 26 18:37:14 crc kubenswrapper[4787]: I0126 18:37:14.332555 4787 generic.go:334] "Generic (PLEG): container finished" podID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" containerID="f31d944696c6f3ff934832c0e25b91116cad3ad6fca9b8fbd9cc449852b4724f" exitCode=0 Jan 26 18:37:14 crc kubenswrapper[4787]: I0126 18:37:14.332635 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfqbj" event={"ID":"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3","Type":"ContainerDied","Data":"f31d944696c6f3ff934832c0e25b91116cad3ad6fca9b8fbd9cc449852b4724f"} Jan 26 18:37:14 crc kubenswrapper[4787]: I0126 18:37:14.332666 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfqbj" event={"ID":"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3","Type":"ContainerStarted","Data":"ebe374d4c8e444753f05ca14ff3b724f1ce94da495eba92d0304577d019b18a6"} Jan 26 18:37:14 crc kubenswrapper[4787]: I0126 18:37:14.334633 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:37:15 crc kubenswrapper[4787]: I0126 18:37:15.341250 4787 generic.go:334] "Generic (PLEG): container finished" podID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" containerID="10edadeab041eceb4f5a4ff6b59a5da1cc9b212a3075e171cdc28b3cb1af1ddf" exitCode=0 Jan 26 18:37:15 crc kubenswrapper[4787]: I0126 18:37:15.341327 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfqbj" event={"ID":"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3","Type":"ContainerDied","Data":"10edadeab041eceb4f5a4ff6b59a5da1cc9b212a3075e171cdc28b3cb1af1ddf"} Jan 26 18:37:16 crc kubenswrapper[4787]: I0126 18:37:16.351285 4787 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-gfqbj" event={"ID":"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3","Type":"ContainerStarted","Data":"2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5"} Jan 26 18:37:21 crc kubenswrapper[4787]: I0126 18:37:21.597741 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:37:21 crc kubenswrapper[4787]: E0126 18:37:21.598469 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:37:23 crc kubenswrapper[4787]: I0126 18:37:23.095577 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:23 crc kubenswrapper[4787]: I0126 18:37:23.095710 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:23 crc kubenswrapper[4787]: I0126 18:37:23.141182 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:23 crc kubenswrapper[4787]: I0126 18:37:23.179600 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gfqbj" podStartSLOduration=9.626399396 podStartE2EDuration="11.179578668s" podCreationTimestamp="2026-01-26 18:37:12 +0000 UTC" firstStartedPulling="2026-01-26 18:37:14.334449651 +0000 UTC m=+3203.041585784" lastFinishedPulling="2026-01-26 18:37:15.887628923 +0000 UTC m=+3204.594765056" observedRunningTime="2026-01-26 18:37:16.374729415 +0000 UTC 
m=+3205.081865548" watchObservedRunningTime="2026-01-26 18:37:23.179578668 +0000 UTC m=+3211.886714801" Jan 26 18:37:23 crc kubenswrapper[4787]: I0126 18:37:23.460106 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:23 crc kubenswrapper[4787]: I0126 18:37:23.502826 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gfqbj"] Jan 26 18:37:25 crc kubenswrapper[4787]: I0126 18:37:25.414710 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gfqbj" podUID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" containerName="registry-server" containerID="cri-o://2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5" gracePeriod=2 Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.017911 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.188042 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntkx7\" (UniqueName: \"kubernetes.io/projected/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-kube-api-access-ntkx7\") pod \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.188105 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-catalog-content\") pod \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.188149 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-utilities\") pod \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\" (UID: \"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3\") " Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.189281 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-utilities" (OuterVolumeSpecName: "utilities") pod "c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" (UID: "c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.189388 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.197544 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-kube-api-access-ntkx7" (OuterVolumeSpecName: "kube-api-access-ntkx7") pod "c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" (UID: "c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3"). InnerVolumeSpecName "kube-api-access-ntkx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.252040 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" (UID: "c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.290453 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.290499 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntkx7\" (UniqueName: \"kubernetes.io/projected/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3-kube-api-access-ntkx7\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.433589 4787 generic.go:334] "Generic (PLEG): container finished" podID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" containerID="2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5" exitCode=0 Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.433687 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfqbj" event={"ID":"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3","Type":"ContainerDied","Data":"2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5"} Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.434017 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfqbj" event={"ID":"c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3","Type":"ContainerDied","Data":"ebe374d4c8e444753f05ca14ff3b724f1ce94da495eba92d0304577d019b18a6"} Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.433706 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gfqbj" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.434083 4787 scope.go:117] "RemoveContainer" containerID="2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.461380 4787 scope.go:117] "RemoveContainer" containerID="10edadeab041eceb4f5a4ff6b59a5da1cc9b212a3075e171cdc28b3cb1af1ddf" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.475360 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gfqbj"] Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.498260 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gfqbj"] Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.506382 4787 scope.go:117] "RemoveContainer" containerID="f31d944696c6f3ff934832c0e25b91116cad3ad6fca9b8fbd9cc449852b4724f" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.532352 4787 scope.go:117] "RemoveContainer" containerID="2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5" Jan 26 18:37:27 crc kubenswrapper[4787]: E0126 18:37:27.532882 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5\": container with ID starting with 2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5 not found: ID does not exist" containerID="2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.532922 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5"} err="failed to get container status \"2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5\": rpc error: code = NotFound desc = could not find 
container \"2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5\": container with ID starting with 2b350ef2d4738fca1656590fe5ec63da8e5085da08a3ad5efaa5ed1c4a65caf5 not found: ID does not exist" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.533079 4787 scope.go:117] "RemoveContainer" containerID="10edadeab041eceb4f5a4ff6b59a5da1cc9b212a3075e171cdc28b3cb1af1ddf" Jan 26 18:37:27 crc kubenswrapper[4787]: E0126 18:37:27.533480 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10edadeab041eceb4f5a4ff6b59a5da1cc9b212a3075e171cdc28b3cb1af1ddf\": container with ID starting with 10edadeab041eceb4f5a4ff6b59a5da1cc9b212a3075e171cdc28b3cb1af1ddf not found: ID does not exist" containerID="10edadeab041eceb4f5a4ff6b59a5da1cc9b212a3075e171cdc28b3cb1af1ddf" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.533546 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10edadeab041eceb4f5a4ff6b59a5da1cc9b212a3075e171cdc28b3cb1af1ddf"} err="failed to get container status \"10edadeab041eceb4f5a4ff6b59a5da1cc9b212a3075e171cdc28b3cb1af1ddf\": rpc error: code = NotFound desc = could not find container \"10edadeab041eceb4f5a4ff6b59a5da1cc9b212a3075e171cdc28b3cb1af1ddf\": container with ID starting with 10edadeab041eceb4f5a4ff6b59a5da1cc9b212a3075e171cdc28b3cb1af1ddf not found: ID does not exist" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.533596 4787 scope.go:117] "RemoveContainer" containerID="f31d944696c6f3ff934832c0e25b91116cad3ad6fca9b8fbd9cc449852b4724f" Jan 26 18:37:27 crc kubenswrapper[4787]: E0126 18:37:27.534078 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31d944696c6f3ff934832c0e25b91116cad3ad6fca9b8fbd9cc449852b4724f\": container with ID starting with f31d944696c6f3ff934832c0e25b91116cad3ad6fca9b8fbd9cc449852b4724f not found: ID does 
not exist" containerID="f31d944696c6f3ff934832c0e25b91116cad3ad6fca9b8fbd9cc449852b4724f" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.534104 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31d944696c6f3ff934832c0e25b91116cad3ad6fca9b8fbd9cc449852b4724f"} err="failed to get container status \"f31d944696c6f3ff934832c0e25b91116cad3ad6fca9b8fbd9cc449852b4724f\": rpc error: code = NotFound desc = could not find container \"f31d944696c6f3ff934832c0e25b91116cad3ad6fca9b8fbd9cc449852b4724f\": container with ID starting with f31d944696c6f3ff934832c0e25b91116cad3ad6fca9b8fbd9cc449852b4724f not found: ID does not exist" Jan 26 18:37:27 crc kubenswrapper[4787]: I0126 18:37:27.600347 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" path="/var/lib/kubelet/pods/c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3/volumes" Jan 26 18:37:33 crc kubenswrapper[4787]: I0126 18:37:33.589370 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:37:33 crc kubenswrapper[4787]: E0126 18:37:33.590346 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:37:48 crc kubenswrapper[4787]: I0126 18:37:48.590238 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:37:48 crc kubenswrapper[4787]: E0126 18:37:48.591589 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:38:03 crc kubenswrapper[4787]: I0126 18:38:03.589517 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:38:03 crc kubenswrapper[4787]: E0126 18:38:03.590649 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:38:14 crc kubenswrapper[4787]: I0126 18:38:14.588911 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:38:14 crc kubenswrapper[4787]: E0126 18:38:14.591771 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:38:28 crc kubenswrapper[4787]: I0126 18:38:28.591406 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:38:28 crc kubenswrapper[4787]: E0126 18:38:28.592389 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:38:42 crc kubenswrapper[4787]: I0126 18:38:42.591196 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:38:42 crc kubenswrapper[4787]: E0126 18:38:42.592496 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:38:53 crc kubenswrapper[4787]: I0126 18:38:53.590245 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:38:53 crc kubenswrapper[4787]: E0126 18:38:53.591445 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:39:08 crc kubenswrapper[4787]: I0126 18:39:08.589910 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:39:08 crc kubenswrapper[4787]: E0126 18:39:08.591268 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:39:23 crc kubenswrapper[4787]: I0126 18:39:23.589089 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:39:23 crc kubenswrapper[4787]: E0126 18:39:23.589667 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:39:34 crc kubenswrapper[4787]: I0126 18:39:34.589170 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:39:34 crc kubenswrapper[4787]: E0126 18:39:34.589926 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:39:47 crc kubenswrapper[4787]: I0126 18:39:47.589976 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:39:47 crc kubenswrapper[4787]: E0126 18:39:47.591378 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:40:00 crc kubenswrapper[4787]: I0126 18:40:00.589589 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:40:00 crc kubenswrapper[4787]: E0126 18:40:00.590405 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:40:15 crc kubenswrapper[4787]: I0126 18:40:15.589475 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:40:15 crc kubenswrapper[4787]: E0126 18:40:15.590052 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:40:30 crc kubenswrapper[4787]: I0126 18:40:30.589982 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:40:30 crc kubenswrapper[4787]: E0126 18:40:30.591233 4787 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:40:45 crc kubenswrapper[4787]: I0126 18:40:45.590101 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:40:45 crc kubenswrapper[4787]: E0126 18:40:45.591085 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:41:00 crc kubenswrapper[4787]: I0126 18:41:00.589536 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:41:01 crc kubenswrapper[4787]: I0126 18:41:01.146507 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"3ddc4248cd4501727da5a4fabfaa920c7db9520cbfff045b4f4ad2c76701bd01"} Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.460887 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z6l6n"] Jan 26 18:41:44 crc kubenswrapper[4787]: E0126 18:41:44.461701 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" containerName="extract-content" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 
18:41:44.461716 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" containerName="extract-content" Jan 26 18:41:44 crc kubenswrapper[4787]: E0126 18:41:44.461733 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" containerName="registry-server" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.461741 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" containerName="registry-server" Jan 26 18:41:44 crc kubenswrapper[4787]: E0126 18:41:44.461774 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" containerName="extract-utilities" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.461782 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" containerName="extract-utilities" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.461938 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2f9f836-bc5e-4e82-9e4f-ab147dee9ed3" containerName="registry-server" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.463127 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.477499 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6l6n"] Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.497860 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-utilities\") pod \"redhat-marketplace-z6l6n\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.497975 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xk2\" (UniqueName: \"kubernetes.io/projected/be35b6fa-069e-44b0-8e01-ee3a8c84348e-kube-api-access-48xk2\") pod \"redhat-marketplace-z6l6n\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.498079 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-catalog-content\") pod \"redhat-marketplace-z6l6n\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.599740 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48xk2\" (UniqueName: \"kubernetes.io/projected/be35b6fa-069e-44b0-8e01-ee3a8c84348e-kube-api-access-48xk2\") pod \"redhat-marketplace-z6l6n\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.599880 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-catalog-content\") pod \"redhat-marketplace-z6l6n\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.600017 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-utilities\") pod \"redhat-marketplace-z6l6n\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.600747 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-catalog-content\") pod \"redhat-marketplace-z6l6n\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.600803 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-utilities\") pod \"redhat-marketplace-z6l6n\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.634490 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48xk2\" (UniqueName: \"kubernetes.io/projected/be35b6fa-069e-44b0-8e01-ee3a8c84348e-kube-api-access-48xk2\") pod \"redhat-marketplace-z6l6n\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:44 crc kubenswrapper[4787]: I0126 18:41:44.798205 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:45 crc kubenswrapper[4787]: I0126 18:41:45.254251 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6l6n"] Jan 26 18:41:45 crc kubenswrapper[4787]: I0126 18:41:45.577739 4787 generic.go:334] "Generic (PLEG): container finished" podID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" containerID="421460c4e11d3e6876e76104d6e5192fb894606780f32bfb335896aeecf9b322" exitCode=0 Jan 26 18:41:45 crc kubenswrapper[4787]: I0126 18:41:45.577871 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6l6n" event={"ID":"be35b6fa-069e-44b0-8e01-ee3a8c84348e","Type":"ContainerDied","Data":"421460c4e11d3e6876e76104d6e5192fb894606780f32bfb335896aeecf9b322"} Jan 26 18:41:45 crc kubenswrapper[4787]: I0126 18:41:45.578078 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6l6n" event={"ID":"be35b6fa-069e-44b0-8e01-ee3a8c84348e","Type":"ContainerStarted","Data":"b6b071972274c5d52b4e29ea82d942abfa6ac9e3b0046fec0f354f5f60592ac9"} Jan 26 18:41:46 crc kubenswrapper[4787]: I0126 18:41:46.588115 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6l6n" event={"ID":"be35b6fa-069e-44b0-8e01-ee3a8c84348e","Type":"ContainerStarted","Data":"c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f"} Jan 26 18:41:46 crc kubenswrapper[4787]: I0126 18:41:46.869117 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2q989"] Jan 26 18:41:46 crc kubenswrapper[4787]: I0126 18:41:46.875784 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:46 crc kubenswrapper[4787]: I0126 18:41:46.881473 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q989"] Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.032703 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksms5\" (UniqueName: \"kubernetes.io/projected/ab9460e6-abb8-42f4-985d-e43131ee4675-kube-api-access-ksms5\") pod \"community-operators-2q989\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.033161 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-catalog-content\") pod \"community-operators-2q989\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.033247 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-utilities\") pod \"community-operators-2q989\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.135344 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-catalog-content\") pod \"community-operators-2q989\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.135434 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-utilities\") pod \"community-operators-2q989\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.135563 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksms5\" (UniqueName: \"kubernetes.io/projected/ab9460e6-abb8-42f4-985d-e43131ee4675-kube-api-access-ksms5\") pod \"community-operators-2q989\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.135882 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-catalog-content\") pod \"community-operators-2q989\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.136548 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-utilities\") pod \"community-operators-2q989\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.162426 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksms5\" (UniqueName: \"kubernetes.io/projected/ab9460e6-abb8-42f4-985d-e43131ee4675-kube-api-access-ksms5\") pod \"community-operators-2q989\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.233325 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.597990 4787 generic.go:334] "Generic (PLEG): container finished" podID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" containerID="c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f" exitCode=0 Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.598224 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6l6n" event={"ID":"be35b6fa-069e-44b0-8e01-ee3a8c84348e","Type":"ContainerDied","Data":"c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f"} Jan 26 18:41:47 crc kubenswrapper[4787]: I0126 18:41:47.720261 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q989"] Jan 26 18:41:48 crc kubenswrapper[4787]: I0126 18:41:48.608668 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6l6n" event={"ID":"be35b6fa-069e-44b0-8e01-ee3a8c84348e","Type":"ContainerStarted","Data":"0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b"} Jan 26 18:41:48 crc kubenswrapper[4787]: I0126 18:41:48.611816 4787 generic.go:334] "Generic (PLEG): container finished" podID="ab9460e6-abb8-42f4-985d-e43131ee4675" containerID="1eeb69abd0d851afc03c2964e8584ca658b2951d6385eb3b52c0200d9f4be90a" exitCode=0 Jan 26 18:41:48 crc kubenswrapper[4787]: I0126 18:41:48.611865 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q989" event={"ID":"ab9460e6-abb8-42f4-985d-e43131ee4675","Type":"ContainerDied","Data":"1eeb69abd0d851afc03c2964e8584ca658b2951d6385eb3b52c0200d9f4be90a"} Jan 26 18:41:48 crc kubenswrapper[4787]: I0126 18:41:48.611930 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q989" 
event={"ID":"ab9460e6-abb8-42f4-985d-e43131ee4675","Type":"ContainerStarted","Data":"d536c7faa0767e801be27154ed1f7b37719f6813fff3cba8878da5d9ad087856"} Jan 26 18:41:48 crc kubenswrapper[4787]: I0126 18:41:48.636739 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z6l6n" podStartSLOduration=2.158046617 podStartE2EDuration="4.636712023s" podCreationTimestamp="2026-01-26 18:41:44 +0000 UTC" firstStartedPulling="2026-01-26 18:41:45.581366469 +0000 UTC m=+3474.288502652" lastFinishedPulling="2026-01-26 18:41:48.060031925 +0000 UTC m=+3476.767168058" observedRunningTime="2026-01-26 18:41:48.632414088 +0000 UTC m=+3477.339550221" watchObservedRunningTime="2026-01-26 18:41:48.636712023 +0000 UTC m=+3477.343848186" Jan 26 18:41:49 crc kubenswrapper[4787]: I0126 18:41:49.630363 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q989" event={"ID":"ab9460e6-abb8-42f4-985d-e43131ee4675","Type":"ContainerStarted","Data":"cd6f9b0b0470721c82c5ecff4717c60c89a9eae6ecc145c0c469fdc9fa4728cd"} Jan 26 18:41:50 crc kubenswrapper[4787]: I0126 18:41:50.642638 4787 generic.go:334] "Generic (PLEG): container finished" podID="ab9460e6-abb8-42f4-985d-e43131ee4675" containerID="cd6f9b0b0470721c82c5ecff4717c60c89a9eae6ecc145c0c469fdc9fa4728cd" exitCode=0 Jan 26 18:41:50 crc kubenswrapper[4787]: I0126 18:41:50.642696 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q989" event={"ID":"ab9460e6-abb8-42f4-985d-e43131ee4675","Type":"ContainerDied","Data":"cd6f9b0b0470721c82c5ecff4717c60c89a9eae6ecc145c0c469fdc9fa4728cd"} Jan 26 18:41:51 crc kubenswrapper[4787]: I0126 18:41:51.653202 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q989" 
event={"ID":"ab9460e6-abb8-42f4-985d-e43131ee4675","Type":"ContainerStarted","Data":"134a1db24cef098806d8c4cf70aad616c205296c996f83de097e11304a6d6f16"} Jan 26 18:41:51 crc kubenswrapper[4787]: I0126 18:41:51.674635 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2q989" podStartSLOduration=3.179738377 podStartE2EDuration="5.67460683s" podCreationTimestamp="2026-01-26 18:41:46 +0000 UTC" firstStartedPulling="2026-01-26 18:41:48.613938515 +0000 UTC m=+3477.321074648" lastFinishedPulling="2026-01-26 18:41:51.108806968 +0000 UTC m=+3479.815943101" observedRunningTime="2026-01-26 18:41:51.673441982 +0000 UTC m=+3480.380578135" watchObservedRunningTime="2026-01-26 18:41:51.67460683 +0000 UTC m=+3480.381742963" Jan 26 18:41:54 crc kubenswrapper[4787]: I0126 18:41:54.798879 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:54 crc kubenswrapper[4787]: I0126 18:41:54.799319 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:54 crc kubenswrapper[4787]: I0126 18:41:54.852934 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:55 crc kubenswrapper[4787]: I0126 18:41:55.737382 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:57 crc kubenswrapper[4787]: I0126 18:41:57.234197 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:57 crc kubenswrapper[4787]: I0126 18:41:57.234675 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:57 crc kubenswrapper[4787]: I0126 18:41:57.290386 4787 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:57 crc kubenswrapper[4787]: I0126 18:41:57.646510 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6l6n"] Jan 26 18:41:57 crc kubenswrapper[4787]: I0126 18:41:57.700980 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z6l6n" podUID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" containerName="registry-server" containerID="cri-o://0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b" gracePeriod=2 Jan 26 18:41:57 crc kubenswrapper[4787]: I0126 18:41:57.764612 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2q989" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.586227 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.712052 4787 generic.go:334] "Generic (PLEG): container finished" podID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" containerID="0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b" exitCode=0 Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.712813 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6l6n" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.713140 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6l6n" event={"ID":"be35b6fa-069e-44b0-8e01-ee3a8c84348e","Type":"ContainerDied","Data":"0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b"} Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.713303 4787 scope.go:117] "RemoveContainer" containerID="0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.713231 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6l6n" event={"ID":"be35b6fa-069e-44b0-8e01-ee3a8c84348e","Type":"ContainerDied","Data":"b6b071972274c5d52b4e29ea82d942abfa6ac9e3b0046fec0f354f5f60592ac9"} Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.715857 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48xk2\" (UniqueName: \"kubernetes.io/projected/be35b6fa-069e-44b0-8e01-ee3a8c84348e-kube-api-access-48xk2\") pod \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.715976 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-catalog-content\") pod \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.715997 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-utilities\") pod \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\" (UID: \"be35b6fa-069e-44b0-8e01-ee3a8c84348e\") " Jan 26 18:41:58 crc 
kubenswrapper[4787]: I0126 18:41:58.717459 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-utilities" (OuterVolumeSpecName: "utilities") pod "be35b6fa-069e-44b0-8e01-ee3a8c84348e" (UID: "be35b6fa-069e-44b0-8e01-ee3a8c84348e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.722323 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be35b6fa-069e-44b0-8e01-ee3a8c84348e-kube-api-access-48xk2" (OuterVolumeSpecName: "kube-api-access-48xk2") pod "be35b6fa-069e-44b0-8e01-ee3a8c84348e" (UID: "be35b6fa-069e-44b0-8e01-ee3a8c84348e"). InnerVolumeSpecName "kube-api-access-48xk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.747760 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be35b6fa-069e-44b0-8e01-ee3a8c84348e" (UID: "be35b6fa-069e-44b0-8e01-ee3a8c84348e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.769298 4787 scope.go:117] "RemoveContainer" containerID="c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.798135 4787 scope.go:117] "RemoveContainer" containerID="421460c4e11d3e6876e76104d6e5192fb894606780f32bfb335896aeecf9b322" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.817213 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48xk2\" (UniqueName: \"kubernetes.io/projected/be35b6fa-069e-44b0-8e01-ee3a8c84348e-kube-api-access-48xk2\") on node \"crc\" DevicePath \"\"" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.817244 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.817257 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be35b6fa-069e-44b0-8e01-ee3a8c84348e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.819958 4787 scope.go:117] "RemoveContainer" containerID="0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b" Jan 26 18:41:58 crc kubenswrapper[4787]: E0126 18:41:58.820462 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b\": container with ID starting with 0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b not found: ID does not exist" containerID="0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.820495 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b"} err="failed to get container status \"0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b\": rpc error: code = NotFound desc = could not find container \"0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b\": container with ID starting with 0d9e88300092ecc222e6dc4f4c8f739d2cf19bbf0873b039224249473e55129b not found: ID does not exist" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.820550 4787 scope.go:117] "RemoveContainer" containerID="c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f" Jan 26 18:41:58 crc kubenswrapper[4787]: E0126 18:41:58.821058 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f\": container with ID starting with c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f not found: ID does not exist" containerID="c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.821104 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f"} err="failed to get container status \"c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f\": rpc error: code = NotFound desc = could not find container \"c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f\": container with ID starting with c1c3a2c1c7350270e97e2e6e94731f4cb516fee05ac08f8d8f99beff5fa5686f not found: ID does not exist" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.821135 4787 scope.go:117] "RemoveContainer" containerID="421460c4e11d3e6876e76104d6e5192fb894606780f32bfb335896aeecf9b322" Jan 26 18:41:58 crc kubenswrapper[4787]: E0126 18:41:58.821700 4787 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"421460c4e11d3e6876e76104d6e5192fb894606780f32bfb335896aeecf9b322\": container with ID starting with 421460c4e11d3e6876e76104d6e5192fb894606780f32bfb335896aeecf9b322 not found: ID does not exist" containerID="421460c4e11d3e6876e76104d6e5192fb894606780f32bfb335896aeecf9b322" Jan 26 18:41:58 crc kubenswrapper[4787]: I0126 18:41:58.821726 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"421460c4e11d3e6876e76104d6e5192fb894606780f32bfb335896aeecf9b322"} err="failed to get container status \"421460c4e11d3e6876e76104d6e5192fb894606780f32bfb335896aeecf9b322\": rpc error: code = NotFound desc = could not find container \"421460c4e11d3e6876e76104d6e5192fb894606780f32bfb335896aeecf9b322\": container with ID starting with 421460c4e11d3e6876e76104d6e5192fb894606780f32bfb335896aeecf9b322 not found: ID does not exist" Jan 26 18:41:59 crc kubenswrapper[4787]: I0126 18:41:59.051562 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6l6n"] Jan 26 18:41:59 crc kubenswrapper[4787]: I0126 18:41:59.056741 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6l6n"] Jan 26 18:41:59 crc kubenswrapper[4787]: I0126 18:41:59.608810 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" path="/var/lib/kubelet/pods/be35b6fa-069e-44b0-8e01-ee3a8c84348e/volumes" Jan 26 18:42:02 crc kubenswrapper[4787]: I0126 18:42:02.049085 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q989"] Jan 26 18:42:02 crc kubenswrapper[4787]: I0126 18:42:02.049473 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2q989" podUID="ab9460e6-abb8-42f4-985d-e43131ee4675" containerName="registry-server" 
containerID="cri-o://134a1db24cef098806d8c4cf70aad616c205296c996f83de097e11304a6d6f16" gracePeriod=2 Jan 26 18:42:02 crc kubenswrapper[4787]: E0126 18:42:02.219433 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab9460e6_abb8_42f4_985d_e43131ee4675.slice/crio-134a1db24cef098806d8c4cf70aad616c205296c996f83de097e11304a6d6f16.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:42:02 crc kubenswrapper[4787]: I0126 18:42:02.747341 4787 generic.go:334] "Generic (PLEG): container finished" podID="ab9460e6-abb8-42f4-985d-e43131ee4675" containerID="134a1db24cef098806d8c4cf70aad616c205296c996f83de097e11304a6d6f16" exitCode=0 Jan 26 18:42:02 crc kubenswrapper[4787]: I0126 18:42:02.747427 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q989" event={"ID":"ab9460e6-abb8-42f4-985d-e43131ee4675","Type":"ContainerDied","Data":"134a1db24cef098806d8c4cf70aad616c205296c996f83de097e11304a6d6f16"} Jan 26 18:42:02 crc kubenswrapper[4787]: I0126 18:42:02.990513 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2q989" Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.182717 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-utilities\") pod \"ab9460e6-abb8-42f4-985d-e43131ee4675\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.182882 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksms5\" (UniqueName: \"kubernetes.io/projected/ab9460e6-abb8-42f4-985d-e43131ee4675-kube-api-access-ksms5\") pod \"ab9460e6-abb8-42f4-985d-e43131ee4675\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.182935 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-catalog-content\") pod \"ab9460e6-abb8-42f4-985d-e43131ee4675\" (UID: \"ab9460e6-abb8-42f4-985d-e43131ee4675\") " Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.184873 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-utilities" (OuterVolumeSpecName: "utilities") pod "ab9460e6-abb8-42f4-985d-e43131ee4675" (UID: "ab9460e6-abb8-42f4-985d-e43131ee4675"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.191663 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab9460e6-abb8-42f4-985d-e43131ee4675-kube-api-access-ksms5" (OuterVolumeSpecName: "kube-api-access-ksms5") pod "ab9460e6-abb8-42f4-985d-e43131ee4675" (UID: "ab9460e6-abb8-42f4-985d-e43131ee4675"). InnerVolumeSpecName "kube-api-access-ksms5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.235750 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab9460e6-abb8-42f4-985d-e43131ee4675" (UID: "ab9460e6-abb8-42f4-985d-e43131ee4675"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.284527 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.284578 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab9460e6-abb8-42f4-985d-e43131ee4675-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.284598 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksms5\" (UniqueName: \"kubernetes.io/projected/ab9460e6-abb8-42f4-985d-e43131ee4675-kube-api-access-ksms5\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.756872 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q989" event={"ID":"ab9460e6-abb8-42f4-985d-e43131ee4675","Type":"ContainerDied","Data":"d536c7faa0767e801be27154ed1f7b37719f6813fff3cba8878da5d9ad087856"} Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.756919 4787 scope.go:117] "RemoveContainer" containerID="134a1db24cef098806d8c4cf70aad616c205296c996f83de097e11304a6d6f16" Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.756975 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2q989" Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.783658 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q989"] Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.789306 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2q989"] Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.790273 4787 scope.go:117] "RemoveContainer" containerID="cd6f9b0b0470721c82c5ecff4717c60c89a9eae6ecc145c0c469fdc9fa4728cd" Jan 26 18:42:03 crc kubenswrapper[4787]: I0126 18:42:03.818123 4787 scope.go:117] "RemoveContainer" containerID="1eeb69abd0d851afc03c2964e8584ca658b2951d6385eb3b52c0200d9f4be90a" Jan 26 18:42:05 crc kubenswrapper[4787]: I0126 18:42:05.606856 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab9460e6-abb8-42f4-985d-e43131ee4675" path="/var/lib/kubelet/pods/ab9460e6-abb8-42f4-985d-e43131ee4675/volumes" Jan 26 18:43:16 crc kubenswrapper[4787]: I0126 18:43:16.808456 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:43:16 crc kubenswrapper[4787]: I0126 18:43:16.809168 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:43:46 crc kubenswrapper[4787]: I0126 18:43:46.807721 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:43:46 crc kubenswrapper[4787]: I0126 18:43:46.808330 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:44:16 crc kubenswrapper[4787]: I0126 18:44:16.808214 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:44:16 crc kubenswrapper[4787]: I0126 18:44:16.808714 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:44:16 crc kubenswrapper[4787]: I0126 18:44:16.808795 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 18:44:16 crc kubenswrapper[4787]: I0126 18:44:16.809541 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ddc4248cd4501727da5a4fabfaa920c7db9520cbfff045b4f4ad2c76701bd01"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:44:16 crc kubenswrapper[4787]: I0126 18:44:16.809614 4787 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://3ddc4248cd4501727da5a4fabfaa920c7db9520cbfff045b4f4ad2c76701bd01" gracePeriod=600 Jan 26 18:44:17 crc kubenswrapper[4787]: I0126 18:44:17.791042 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="3ddc4248cd4501727da5a4fabfaa920c7db9520cbfff045b4f4ad2c76701bd01" exitCode=0 Jan 26 18:44:17 crc kubenswrapper[4787]: I0126 18:44:17.791075 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"3ddc4248cd4501727da5a4fabfaa920c7db9520cbfff045b4f4ad2c76701bd01"} Jan 26 18:44:17 crc kubenswrapper[4787]: I0126 18:44:17.791605 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539"} Jan 26 18:44:17 crc kubenswrapper[4787]: I0126 18:44:17.791626 4787 scope.go:117] "RemoveContainer" containerID="89a3a883705026e8f1d3b377d2c52608071e1cac5f63ff7fc425699bac9ddf4e" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.185507 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj"] Jan 26 18:45:00 crc kubenswrapper[4787]: E0126 18:45:00.186342 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab9460e6-abb8-42f4-985d-e43131ee4675" containerName="extract-utilities" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.186359 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab9460e6-abb8-42f4-985d-e43131ee4675" 
containerName="extract-utilities" Jan 26 18:45:00 crc kubenswrapper[4787]: E0126 18:45:00.186373 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" containerName="extract-content" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.186381 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" containerName="extract-content" Jan 26 18:45:00 crc kubenswrapper[4787]: E0126 18:45:00.186393 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab9460e6-abb8-42f4-985d-e43131ee4675" containerName="extract-content" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.186401 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab9460e6-abb8-42f4-985d-e43131ee4675" containerName="extract-content" Jan 26 18:45:00 crc kubenswrapper[4787]: E0126 18:45:00.186414 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" containerName="registry-server" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.186421 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" containerName="registry-server" Jan 26 18:45:00 crc kubenswrapper[4787]: E0126 18:45:00.186433 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" containerName="extract-utilities" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.186443 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" containerName="extract-utilities" Jan 26 18:45:00 crc kubenswrapper[4787]: E0126 18:45:00.186468 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab9460e6-abb8-42f4-985d-e43131ee4675" containerName="registry-server" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.186476 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab9460e6-abb8-42f4-985d-e43131ee4675" 
containerName="registry-server" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.186842 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="be35b6fa-069e-44b0-8e01-ee3a8c84348e" containerName="registry-server" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.186866 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab9460e6-abb8-42f4-985d-e43131ee4675" containerName="registry-server" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.187979 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.190822 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.191098 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.194531 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj"] Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.252372 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f717fe1-1edc-4072-8271-4116ac22a6df-secret-volume\") pod \"collect-profiles-29490885-4j7kj\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.252558 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x5sh\" (UniqueName: \"kubernetes.io/projected/0f717fe1-1edc-4072-8271-4116ac22a6df-kube-api-access-6x5sh\") pod 
\"collect-profiles-29490885-4j7kj\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.252625 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f717fe1-1edc-4072-8271-4116ac22a6df-config-volume\") pod \"collect-profiles-29490885-4j7kj\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.354338 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f717fe1-1edc-4072-8271-4116ac22a6df-config-volume\") pod \"collect-profiles-29490885-4j7kj\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.354529 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f717fe1-1edc-4072-8271-4116ac22a6df-secret-volume\") pod \"collect-profiles-29490885-4j7kj\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.354604 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x5sh\" (UniqueName: \"kubernetes.io/projected/0f717fe1-1edc-4072-8271-4116ac22a6df-kube-api-access-6x5sh\") pod \"collect-profiles-29490885-4j7kj\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.355585 4787 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f717fe1-1edc-4072-8271-4116ac22a6df-config-volume\") pod \"collect-profiles-29490885-4j7kj\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.361607 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f717fe1-1edc-4072-8271-4116ac22a6df-secret-volume\") pod \"collect-profiles-29490885-4j7kj\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.376598 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x5sh\" (UniqueName: \"kubernetes.io/projected/0f717fe1-1edc-4072-8271-4116ac22a6df-kube-api-access-6x5sh\") pod \"collect-profiles-29490885-4j7kj\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.511247 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:00 crc kubenswrapper[4787]: I0126 18:45:00.921701 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj"] Jan 26 18:45:01 crc kubenswrapper[4787]: I0126 18:45:01.119069 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" event={"ID":"0f717fe1-1edc-4072-8271-4116ac22a6df","Type":"ContainerStarted","Data":"50c1a40a4f70cd4ab6e5188ae60bcdef27b508eeb72d96a34b4b181600ceef08"} Jan 26 18:45:01 crc kubenswrapper[4787]: I0126 18:45:01.119434 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" event={"ID":"0f717fe1-1edc-4072-8271-4116ac22a6df","Type":"ContainerStarted","Data":"cb96460e4ec411c2bc8902eba444d7c68fbfdd0db1e111863a1bbb27a2aa9b98"} Jan 26 18:45:01 crc kubenswrapper[4787]: I0126 18:45:01.139850 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" podStartSLOduration=1.139830541 podStartE2EDuration="1.139830541s" podCreationTimestamp="2026-01-26 18:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:45:01.133776782 +0000 UTC m=+3669.840912935" watchObservedRunningTime="2026-01-26 18:45:01.139830541 +0000 UTC m=+3669.846966674" Jan 26 18:45:02 crc kubenswrapper[4787]: I0126 18:45:02.128062 4787 generic.go:334] "Generic (PLEG): container finished" podID="0f717fe1-1edc-4072-8271-4116ac22a6df" containerID="50c1a40a4f70cd4ab6e5188ae60bcdef27b508eeb72d96a34b4b181600ceef08" exitCode=0 Jan 26 18:45:02 crc kubenswrapper[4787]: I0126 18:45:02.128121 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" event={"ID":"0f717fe1-1edc-4072-8271-4116ac22a6df","Type":"ContainerDied","Data":"50c1a40a4f70cd4ab6e5188ae60bcdef27b508eeb72d96a34b4b181600ceef08"} Jan 26 18:45:03 crc kubenswrapper[4787]: I0126 18:45:03.411418 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:03 crc kubenswrapper[4787]: I0126 18:45:03.595262 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f717fe1-1edc-4072-8271-4116ac22a6df-secret-volume\") pod \"0f717fe1-1edc-4072-8271-4116ac22a6df\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " Jan 26 18:45:03 crc kubenswrapper[4787]: I0126 18:45:03.595457 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f717fe1-1edc-4072-8271-4116ac22a6df-config-volume\") pod \"0f717fe1-1edc-4072-8271-4116ac22a6df\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " Jan 26 18:45:03 crc kubenswrapper[4787]: I0126 18:45:03.595527 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x5sh\" (UniqueName: \"kubernetes.io/projected/0f717fe1-1edc-4072-8271-4116ac22a6df-kube-api-access-6x5sh\") pod \"0f717fe1-1edc-4072-8271-4116ac22a6df\" (UID: \"0f717fe1-1edc-4072-8271-4116ac22a6df\") " Jan 26 18:45:03 crc kubenswrapper[4787]: I0126 18:45:03.596194 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f717fe1-1edc-4072-8271-4116ac22a6df-config-volume" (OuterVolumeSpecName: "config-volume") pod "0f717fe1-1edc-4072-8271-4116ac22a6df" (UID: "0f717fe1-1edc-4072-8271-4116ac22a6df"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:45:03 crc kubenswrapper[4787]: I0126 18:45:03.596518 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f717fe1-1edc-4072-8271-4116ac22a6df-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:03 crc kubenswrapper[4787]: I0126 18:45:03.602148 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f717fe1-1edc-4072-8271-4116ac22a6df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0f717fe1-1edc-4072-8271-4116ac22a6df" (UID: "0f717fe1-1edc-4072-8271-4116ac22a6df"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:45:03 crc kubenswrapper[4787]: I0126 18:45:03.602684 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f717fe1-1edc-4072-8271-4116ac22a6df-kube-api-access-6x5sh" (OuterVolumeSpecName: "kube-api-access-6x5sh") pod "0f717fe1-1edc-4072-8271-4116ac22a6df" (UID: "0f717fe1-1edc-4072-8271-4116ac22a6df"). InnerVolumeSpecName "kube-api-access-6x5sh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:45:03 crc kubenswrapper[4787]: I0126 18:45:03.697453 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x5sh\" (UniqueName: \"kubernetes.io/projected/0f717fe1-1edc-4072-8271-4116ac22a6df-kube-api-access-6x5sh\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:03 crc kubenswrapper[4787]: I0126 18:45:03.697493 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f717fe1-1edc-4072-8271-4116ac22a6df-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:04 crc kubenswrapper[4787]: I0126 18:45:04.144737 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" event={"ID":"0f717fe1-1edc-4072-8271-4116ac22a6df","Type":"ContainerDied","Data":"cb96460e4ec411c2bc8902eba444d7c68fbfdd0db1e111863a1bbb27a2aa9b98"} Jan 26 18:45:04 crc kubenswrapper[4787]: I0126 18:45:04.145040 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb96460e4ec411c2bc8902eba444d7c68fbfdd0db1e111863a1bbb27a2aa9b98" Jan 26 18:45:04 crc kubenswrapper[4787]: I0126 18:45:04.144810 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj" Jan 26 18:45:04 crc kubenswrapper[4787]: I0126 18:45:04.493288 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4"] Jan 26 18:45:04 crc kubenswrapper[4787]: I0126 18:45:04.504412 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490840-qqbk4"] Jan 26 18:45:05 crc kubenswrapper[4787]: I0126 18:45:05.599471 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c4cd17-321a-453c-8d73-d19dcb3695ac" path="/var/lib/kubelet/pods/08c4cd17-321a-453c-8d73-d19dcb3695ac/volumes" Jan 26 18:45:57 crc kubenswrapper[4787]: I0126 18:45:57.554938 4787 scope.go:117] "RemoveContainer" containerID="9c5090c840ffa203df1a3828d548fd0ca7079c308e055bd1609a0b68786e5c07" Jan 26 18:46:46 crc kubenswrapper[4787]: I0126 18:46:46.808012 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:46:46 crc kubenswrapper[4787]: I0126 18:46:46.808611 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:47:16 crc kubenswrapper[4787]: I0126 18:47:16.808375 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 26 18:47:16 crc kubenswrapper[4787]: I0126 18:47:16.809039 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:47:46 crc kubenswrapper[4787]: I0126 18:47:46.808165 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:47:46 crc kubenswrapper[4787]: I0126 18:47:46.808833 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:47:46 crc kubenswrapper[4787]: I0126 18:47:46.808932 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 18:47:46 crc kubenswrapper[4787]: I0126 18:47:46.810068 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:47:46 crc kubenswrapper[4787]: I0126 18:47:46.810191 4787 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" gracePeriod=600 Jan 26 18:47:46 crc kubenswrapper[4787]: E0126 18:47:46.938125 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:47:47 crc kubenswrapper[4787]: I0126 18:47:47.438989 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" exitCode=0 Jan 26 18:47:47 crc kubenswrapper[4787]: I0126 18:47:47.439055 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539"} Jan 26 18:47:47 crc kubenswrapper[4787]: I0126 18:47:47.439139 4787 scope.go:117] "RemoveContainer" containerID="3ddc4248cd4501727da5a4fabfaa920c7db9520cbfff045b4f4ad2c76701bd01" Jan 26 18:47:47 crc kubenswrapper[4787]: I0126 18:47:47.439773 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:47:47 crc kubenswrapper[4787]: E0126 18:47:47.440162 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:48:01 crc kubenswrapper[4787]: I0126 18:48:01.599481 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:48:01 crc kubenswrapper[4787]: E0126 18:48:01.600361 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:48:16 crc kubenswrapper[4787]: I0126 18:48:16.589748 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:48:16 crc kubenswrapper[4787]: E0126 18:48:16.590667 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:48:31 crc kubenswrapper[4787]: I0126 18:48:31.596028 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:48:31 crc kubenswrapper[4787]: E0126 18:48:31.596851 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:48:36 crc kubenswrapper[4787]: I0126 18:48:36.933445 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rb8kn"] Jan 26 18:48:36 crc kubenswrapper[4787]: E0126 18:48:36.934213 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f717fe1-1edc-4072-8271-4116ac22a6df" containerName="collect-profiles" Jan 26 18:48:36 crc kubenswrapper[4787]: I0126 18:48:36.934236 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f717fe1-1edc-4072-8271-4116ac22a6df" containerName="collect-profiles" Jan 26 18:48:36 crc kubenswrapper[4787]: I0126 18:48:36.934436 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f717fe1-1edc-4072-8271-4116ac22a6df" containerName="collect-profiles" Jan 26 18:48:36 crc kubenswrapper[4787]: I0126 18:48:36.935826 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:36 crc kubenswrapper[4787]: I0126 18:48:36.946632 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rb8kn"] Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.045031 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-utilities\") pod \"certified-operators-rb8kn\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.045187 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2mgb\" (UniqueName: \"kubernetes.io/projected/0a7195cb-4bd9-4c89-a82a-608622233f6f-kube-api-access-j2mgb\") pod \"certified-operators-rb8kn\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.045234 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-catalog-content\") pod \"certified-operators-rb8kn\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.146962 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2mgb\" (UniqueName: \"kubernetes.io/projected/0a7195cb-4bd9-4c89-a82a-608622233f6f-kube-api-access-j2mgb\") pod \"certified-operators-rb8kn\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.147327 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-catalog-content\") pod \"certified-operators-rb8kn\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.147473 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-utilities\") pod \"certified-operators-rb8kn\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.148220 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-catalog-content\") pod \"certified-operators-rb8kn\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.148383 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-utilities\") pod \"certified-operators-rb8kn\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.167598 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2mgb\" (UniqueName: \"kubernetes.io/projected/0a7195cb-4bd9-4c89-a82a-608622233f6f-kube-api-access-j2mgb\") pod \"certified-operators-rb8kn\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.256099 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.743061 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rb8kn"] Jan 26 18:48:37 crc kubenswrapper[4787]: W0126 18:48:37.747605 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a7195cb_4bd9_4c89_a82a_608622233f6f.slice/crio-5bce760f54fcd6a01c0882566fdacf86070fca0047899557563f44be0f6121ea WatchSource:0}: Error finding container 5bce760f54fcd6a01c0882566fdacf86070fca0047899557563f44be0f6121ea: Status 404 returned error can't find the container with id 5bce760f54fcd6a01c0882566fdacf86070fca0047899557563f44be0f6121ea Jan 26 18:48:37 crc kubenswrapper[4787]: I0126 18:48:37.787648 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb8kn" event={"ID":"0a7195cb-4bd9-4c89-a82a-608622233f6f","Type":"ContainerStarted","Data":"5bce760f54fcd6a01c0882566fdacf86070fca0047899557563f44be0f6121ea"} Jan 26 18:48:38 crc kubenswrapper[4787]: I0126 18:48:38.796688 4787 generic.go:334] "Generic (PLEG): container finished" podID="0a7195cb-4bd9-4c89-a82a-608622233f6f" containerID="3f007570945f59a3c32bf7374d97ec7954cc8124612c549d9077d51d5f34d0d7" exitCode=0 Jan 26 18:48:38 crc kubenswrapper[4787]: I0126 18:48:38.797010 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb8kn" event={"ID":"0a7195cb-4bd9-4c89-a82a-608622233f6f","Type":"ContainerDied","Data":"3f007570945f59a3c32bf7374d97ec7954cc8124612c549d9077d51d5f34d0d7"} Jan 26 18:48:38 crc kubenswrapper[4787]: I0126 18:48:38.799694 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:48:40 crc kubenswrapper[4787]: I0126 18:48:40.818147 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="0a7195cb-4bd9-4c89-a82a-608622233f6f" containerID="9be22c02ba098039aed52f373268ea05b2df0c2f802fe7071d8285f2c56f8c45" exitCode=0 Jan 26 18:48:40 crc kubenswrapper[4787]: I0126 18:48:40.818264 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb8kn" event={"ID":"0a7195cb-4bd9-4c89-a82a-608622233f6f","Type":"ContainerDied","Data":"9be22c02ba098039aed52f373268ea05b2df0c2f802fe7071d8285f2c56f8c45"} Jan 26 18:48:41 crc kubenswrapper[4787]: I0126 18:48:41.844186 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb8kn" event={"ID":"0a7195cb-4bd9-4c89-a82a-608622233f6f","Type":"ContainerStarted","Data":"0a82065f41d60c4e1b9bca3fddb2ba108bce0c309f4da8432ce274066c24c85b"} Jan 26 18:48:41 crc kubenswrapper[4787]: I0126 18:48:41.864996 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rb8kn" podStartSLOduration=3.402440485 podStartE2EDuration="5.864974228s" podCreationTimestamp="2026-01-26 18:48:36 +0000 UTC" firstStartedPulling="2026-01-26 18:48:38.799251603 +0000 UTC m=+3887.506387746" lastFinishedPulling="2026-01-26 18:48:41.261785336 +0000 UTC m=+3889.968921489" observedRunningTime="2026-01-26 18:48:41.863181114 +0000 UTC m=+3890.570317257" watchObservedRunningTime="2026-01-26 18:48:41.864974228 +0000 UTC m=+3890.572110361" Jan 26 18:48:45 crc kubenswrapper[4787]: I0126 18:48:45.589156 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:48:45 crc kubenswrapper[4787]: E0126 18:48:45.590398 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:48:47 crc kubenswrapper[4787]: I0126 18:48:47.257231 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:47 crc kubenswrapper[4787]: I0126 18:48:47.257298 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:47 crc kubenswrapper[4787]: I0126 18:48:47.357494 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:47 crc kubenswrapper[4787]: I0126 18:48:47.929446 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:47 crc kubenswrapper[4787]: I0126 18:48:47.984677 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rb8kn"] Jan 26 18:48:49 crc kubenswrapper[4787]: I0126 18:48:49.893337 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rb8kn" podUID="0a7195cb-4bd9-4c89-a82a-608622233f6f" containerName="registry-server" containerID="cri-o://0a82065f41d60c4e1b9bca3fddb2ba108bce0c309f4da8432ce274066c24c85b" gracePeriod=2 Jan 26 18:48:50 crc kubenswrapper[4787]: I0126 18:48:50.905242 4787 generic.go:334] "Generic (PLEG): container finished" podID="0a7195cb-4bd9-4c89-a82a-608622233f6f" containerID="0a82065f41d60c4e1b9bca3fddb2ba108bce0c309f4da8432ce274066c24c85b" exitCode=0 Jan 26 18:48:50 crc kubenswrapper[4787]: I0126 18:48:50.905319 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb8kn" event={"ID":"0a7195cb-4bd9-4c89-a82a-608622233f6f","Type":"ContainerDied","Data":"0a82065f41d60c4e1b9bca3fddb2ba108bce0c309f4da8432ce274066c24c85b"} Jan 
26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.554083 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.652766 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2mgb\" (UniqueName: \"kubernetes.io/projected/0a7195cb-4bd9-4c89-a82a-608622233f6f-kube-api-access-j2mgb\") pod \"0a7195cb-4bd9-4c89-a82a-608622233f6f\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.653262 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-catalog-content\") pod \"0a7195cb-4bd9-4c89-a82a-608622233f6f\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.653391 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-utilities\") pod \"0a7195cb-4bd9-4c89-a82a-608622233f6f\" (UID: \"0a7195cb-4bd9-4c89-a82a-608622233f6f\") " Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.654145 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-utilities" (OuterVolumeSpecName: "utilities") pod "0a7195cb-4bd9-4c89-a82a-608622233f6f" (UID: "0a7195cb-4bd9-4c89-a82a-608622233f6f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.658003 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7195cb-4bd9-4c89-a82a-608622233f6f-kube-api-access-j2mgb" (OuterVolumeSpecName: "kube-api-access-j2mgb") pod "0a7195cb-4bd9-4c89-a82a-608622233f6f" (UID: "0a7195cb-4bd9-4c89-a82a-608622233f6f"). InnerVolumeSpecName "kube-api-access-j2mgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.701167 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a7195cb-4bd9-4c89-a82a-608622233f6f" (UID: "0a7195cb-4bd9-4c89-a82a-608622233f6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.754989 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2mgb\" (UniqueName: \"kubernetes.io/projected/0a7195cb-4bd9-4c89-a82a-608622233f6f-kube-api-access-j2mgb\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.755025 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.755045 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7195cb-4bd9-4c89-a82a-608622233f6f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.913648 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb8kn" 
event={"ID":"0a7195cb-4bd9-4c89-a82a-608622233f6f","Type":"ContainerDied","Data":"5bce760f54fcd6a01c0882566fdacf86070fca0047899557563f44be0f6121ea"} Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.913697 4787 scope.go:117] "RemoveContainer" containerID="0a82065f41d60c4e1b9bca3fddb2ba108bce0c309f4da8432ce274066c24c85b" Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.913762 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rb8kn" Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.955641 4787 scope.go:117] "RemoveContainer" containerID="9be22c02ba098039aed52f373268ea05b2df0c2f802fe7071d8285f2c56f8c45" Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.964747 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rb8kn"] Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.972168 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rb8kn"] Jan 26 18:48:51 crc kubenswrapper[4787]: I0126 18:48:51.978104 4787 scope.go:117] "RemoveContainer" containerID="3f007570945f59a3c32bf7374d97ec7954cc8124612c549d9077d51d5f34d0d7" Jan 26 18:48:53 crc kubenswrapper[4787]: I0126 18:48:53.597348 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7195cb-4bd9-4c89-a82a-608622233f6f" path="/var/lib/kubelet/pods/0a7195cb-4bd9-4c89-a82a-608622233f6f/volumes" Jan 26 18:48:59 crc kubenswrapper[4787]: I0126 18:48:59.589103 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:48:59 crc kubenswrapper[4787]: E0126 18:48:59.589894 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:49:13 crc kubenswrapper[4787]: I0126 18:49:13.590125 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:49:13 crc kubenswrapper[4787]: E0126 18:49:13.590763 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:49:28 crc kubenswrapper[4787]: I0126 18:49:28.589297 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:49:28 crc kubenswrapper[4787]: E0126 18:49:28.589897 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:49:43 crc kubenswrapper[4787]: I0126 18:49:43.589378 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:49:43 crc kubenswrapper[4787]: E0126 18:49:43.590126 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:49:54 crc kubenswrapper[4787]: I0126 18:49:54.589188 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:49:54 crc kubenswrapper[4787]: E0126 18:49:54.590814 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:50:09 crc kubenswrapper[4787]: I0126 18:50:09.590270 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:50:09 crc kubenswrapper[4787]: E0126 18:50:09.591169 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:50:24 crc kubenswrapper[4787]: I0126 18:50:24.588989 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:50:24 crc kubenswrapper[4787]: E0126 18:50:24.589799 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:50:36 crc kubenswrapper[4787]: I0126 18:50:36.591053 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:50:36 crc kubenswrapper[4787]: E0126 18:50:36.592177 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:50:50 crc kubenswrapper[4787]: I0126 18:50:50.589428 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:50:50 crc kubenswrapper[4787]: E0126 18:50:50.590187 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:51:02 crc kubenswrapper[4787]: I0126 18:51:02.591063 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:51:02 crc kubenswrapper[4787]: E0126 18:51:02.591938 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:51:17 crc kubenswrapper[4787]: I0126 18:51:17.589901 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:51:17 crc kubenswrapper[4787]: E0126 18:51:17.590668 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:51:28 crc kubenswrapper[4787]: I0126 18:51:28.590617 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:51:28 crc kubenswrapper[4787]: E0126 18:51:28.591774 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:51:42 crc kubenswrapper[4787]: I0126 18:51:42.589209 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:51:42 crc kubenswrapper[4787]: E0126 18:51:42.589728 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:51:53 crc kubenswrapper[4787]: I0126 18:51:53.589267 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:51:53 crc kubenswrapper[4787]: E0126 18:51:53.590067 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.399636 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l54px"] Jan 26 18:52:00 crc kubenswrapper[4787]: E0126 18:52:00.400692 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7195cb-4bd9-4c89-a82a-608622233f6f" containerName="extract-content" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.400709 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7195cb-4bd9-4c89-a82a-608622233f6f" containerName="extract-content" Jan 26 18:52:00 crc kubenswrapper[4787]: E0126 18:52:00.400732 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7195cb-4bd9-4c89-a82a-608622233f6f" containerName="registry-server" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.400738 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7195cb-4bd9-4c89-a82a-608622233f6f" containerName="registry-server" Jan 26 18:52:00 crc kubenswrapper[4787]: E0126 
18:52:00.400751 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7195cb-4bd9-4c89-a82a-608622233f6f" containerName="extract-utilities" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.400757 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7195cb-4bd9-4c89-a82a-608622233f6f" containerName="extract-utilities" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.400890 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7195cb-4bd9-4c89-a82a-608622233f6f" containerName="registry-server" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.401850 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.419731 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l54px"] Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.472922 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2779l\" (UniqueName: \"kubernetes.io/projected/a4ae2621-7acf-4c81-80ca-a721bb67913c-kube-api-access-2779l\") pod \"redhat-marketplace-l54px\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.472997 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-utilities\") pod \"redhat-marketplace-l54px\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.473064 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-catalog-content\") pod \"redhat-marketplace-l54px\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.575038 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-utilities\") pod \"redhat-marketplace-l54px\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.575239 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-catalog-content\") pod \"redhat-marketplace-l54px\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.575342 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2779l\" (UniqueName: \"kubernetes.io/projected/a4ae2621-7acf-4c81-80ca-a721bb67913c-kube-api-access-2779l\") pod \"redhat-marketplace-l54px\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.575738 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-utilities\") pod \"redhat-marketplace-l54px\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.575780 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-catalog-content\") pod \"redhat-marketplace-l54px\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.606761 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2779l\" (UniqueName: \"kubernetes.io/projected/a4ae2621-7acf-4c81-80ca-a721bb67913c-kube-api-access-2779l\") pod \"redhat-marketplace-l54px\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:00 crc kubenswrapper[4787]: I0126 18:52:00.737365 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:01 crc kubenswrapper[4787]: I0126 18:52:01.192899 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l54px"] Jan 26 18:52:01 crc kubenswrapper[4787]: I0126 18:52:01.335590 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l54px" event={"ID":"a4ae2621-7acf-4c81-80ca-a721bb67913c","Type":"ContainerStarted","Data":"f67dd798407a044f5bd5ae82d57b3f739156a1fd9d5d7eb779e9fd4e1b1d430e"} Jan 26 18:52:02 crc kubenswrapper[4787]: I0126 18:52:02.344211 4787 generic.go:334] "Generic (PLEG): container finished" podID="a4ae2621-7acf-4c81-80ca-a721bb67913c" containerID="3825f1ead83dcd3102336fac0d11d166fa80ad61919bc98a2698931d693da33d" exitCode=0 Jan 26 18:52:02 crc kubenswrapper[4787]: I0126 18:52:02.344308 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l54px" event={"ID":"a4ae2621-7acf-4c81-80ca-a721bb67913c","Type":"ContainerDied","Data":"3825f1ead83dcd3102336fac0d11d166fa80ad61919bc98a2698931d693da33d"} Jan 26 18:52:04 crc kubenswrapper[4787]: I0126 18:52:04.363179 4787 generic.go:334] "Generic (PLEG): container 
finished" podID="a4ae2621-7acf-4c81-80ca-a721bb67913c" containerID="b3cc1530d9065cddacaac2495234cf99f0253fa75103fb0256734ea4f8441717" exitCode=0 Jan 26 18:52:04 crc kubenswrapper[4787]: I0126 18:52:04.363305 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l54px" event={"ID":"a4ae2621-7acf-4c81-80ca-a721bb67913c","Type":"ContainerDied","Data":"b3cc1530d9065cddacaac2495234cf99f0253fa75103fb0256734ea4f8441717"} Jan 26 18:52:05 crc kubenswrapper[4787]: I0126 18:52:05.375040 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l54px" event={"ID":"a4ae2621-7acf-4c81-80ca-a721bb67913c","Type":"ContainerStarted","Data":"bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2"} Jan 26 18:52:05 crc kubenswrapper[4787]: I0126 18:52:05.397550 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l54px" podStartSLOduration=2.945461401 podStartE2EDuration="5.397522716s" podCreationTimestamp="2026-01-26 18:52:00 +0000 UTC" firstStartedPulling="2026-01-26 18:52:02.346046281 +0000 UTC m=+4091.053182414" lastFinishedPulling="2026-01-26 18:52:04.798107586 +0000 UTC m=+4093.505243729" observedRunningTime="2026-01-26 18:52:05.392961534 +0000 UTC m=+4094.100097667" watchObservedRunningTime="2026-01-26 18:52:05.397522716 +0000 UTC m=+4094.104658849" Jan 26 18:52:06 crc kubenswrapper[4787]: I0126 18:52:06.589326 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:52:06 crc kubenswrapper[4787]: E0126 18:52:06.589874 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:52:10 crc kubenswrapper[4787]: I0126 18:52:10.737824 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:10 crc kubenswrapper[4787]: I0126 18:52:10.738326 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:10 crc kubenswrapper[4787]: I0126 18:52:10.783877 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:11 crc kubenswrapper[4787]: I0126 18:52:11.475094 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:11 crc kubenswrapper[4787]: I0126 18:52:11.523585 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l54px"] Jan 26 18:52:13 crc kubenswrapper[4787]: I0126 18:52:13.436986 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l54px" podUID="a4ae2621-7acf-4c81-80ca-a721bb67913c" containerName="registry-server" containerID="cri-o://bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2" gracePeriod=2 Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.015115 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.089160 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-catalog-content\") pod \"a4ae2621-7acf-4c81-80ca-a721bb67913c\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.089227 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2779l\" (UniqueName: \"kubernetes.io/projected/a4ae2621-7acf-4c81-80ca-a721bb67913c-kube-api-access-2779l\") pod \"a4ae2621-7acf-4c81-80ca-a721bb67913c\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.089264 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-utilities\") pod \"a4ae2621-7acf-4c81-80ca-a721bb67913c\" (UID: \"a4ae2621-7acf-4c81-80ca-a721bb67913c\") " Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.090590 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-utilities" (OuterVolumeSpecName: "utilities") pod "a4ae2621-7acf-4c81-80ca-a721bb67913c" (UID: "a4ae2621-7acf-4c81-80ca-a721bb67913c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.114675 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4ae2621-7acf-4c81-80ca-a721bb67913c" (UID: "a4ae2621-7acf-4c81-80ca-a721bb67913c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.175791 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ae2621-7acf-4c81-80ca-a721bb67913c-kube-api-access-2779l" (OuterVolumeSpecName: "kube-api-access-2779l") pod "a4ae2621-7acf-4c81-80ca-a721bb67913c" (UID: "a4ae2621-7acf-4c81-80ca-a721bb67913c"). InnerVolumeSpecName "kube-api-access-2779l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.190489 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2779l\" (UniqueName: \"kubernetes.io/projected/a4ae2621-7acf-4c81-80ca-a721bb67913c-kube-api-access-2779l\") on node \"crc\" DevicePath \"\"" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.190758 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.190841 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ae2621-7acf-4c81-80ca-a721bb67913c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.443766 4787 generic.go:334] "Generic (PLEG): container finished" podID="a4ae2621-7acf-4c81-80ca-a721bb67913c" containerID="bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2" exitCode=0 Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.443810 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l54px" event={"ID":"a4ae2621-7acf-4c81-80ca-a721bb67913c","Type":"ContainerDied","Data":"bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2"} Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.443837 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l54px" event={"ID":"a4ae2621-7acf-4c81-80ca-a721bb67913c","Type":"ContainerDied","Data":"f67dd798407a044f5bd5ae82d57b3f739156a1fd9d5d7eb779e9fd4e1b1d430e"} Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.443853 4787 scope.go:117] "RemoveContainer" containerID="bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.443851 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l54px" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.459669 4787 scope.go:117] "RemoveContainer" containerID="b3cc1530d9065cddacaac2495234cf99f0253fa75103fb0256734ea4f8441717" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.485227 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l54px"] Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.492291 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l54px"] Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.500606 4787 scope.go:117] "RemoveContainer" containerID="3825f1ead83dcd3102336fac0d11d166fa80ad61919bc98a2698931d693da33d" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.518720 4787 scope.go:117] "RemoveContainer" containerID="bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2" Jan 26 18:52:14 crc kubenswrapper[4787]: E0126 18:52:14.519236 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2\": container with ID starting with bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2 not found: ID does not exist" containerID="bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 
18:52:14.519326 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2"} err="failed to get container status \"bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2\": rpc error: code = NotFound desc = could not find container \"bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2\": container with ID starting with bb528ccede3b8c38c67bd4600367884c8b20b5087d9394c9cc0eac6c2e7299e2 not found: ID does not exist" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.519355 4787 scope.go:117] "RemoveContainer" containerID="b3cc1530d9065cddacaac2495234cf99f0253fa75103fb0256734ea4f8441717" Jan 26 18:52:14 crc kubenswrapper[4787]: E0126 18:52:14.520496 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3cc1530d9065cddacaac2495234cf99f0253fa75103fb0256734ea4f8441717\": container with ID starting with b3cc1530d9065cddacaac2495234cf99f0253fa75103fb0256734ea4f8441717 not found: ID does not exist" containerID="b3cc1530d9065cddacaac2495234cf99f0253fa75103fb0256734ea4f8441717" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.520533 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3cc1530d9065cddacaac2495234cf99f0253fa75103fb0256734ea4f8441717"} err="failed to get container status \"b3cc1530d9065cddacaac2495234cf99f0253fa75103fb0256734ea4f8441717\": rpc error: code = NotFound desc = could not find container \"b3cc1530d9065cddacaac2495234cf99f0253fa75103fb0256734ea4f8441717\": container with ID starting with b3cc1530d9065cddacaac2495234cf99f0253fa75103fb0256734ea4f8441717 not found: ID does not exist" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.520562 4787 scope.go:117] "RemoveContainer" containerID="3825f1ead83dcd3102336fac0d11d166fa80ad61919bc98a2698931d693da33d" Jan 26 18:52:14 crc 
kubenswrapper[4787]: E0126 18:52:14.521140 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3825f1ead83dcd3102336fac0d11d166fa80ad61919bc98a2698931d693da33d\": container with ID starting with 3825f1ead83dcd3102336fac0d11d166fa80ad61919bc98a2698931d693da33d not found: ID does not exist" containerID="3825f1ead83dcd3102336fac0d11d166fa80ad61919bc98a2698931d693da33d" Jan 26 18:52:14 crc kubenswrapper[4787]: I0126 18:52:14.521193 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3825f1ead83dcd3102336fac0d11d166fa80ad61919bc98a2698931d693da33d"} err="failed to get container status \"3825f1ead83dcd3102336fac0d11d166fa80ad61919bc98a2698931d693da33d\": rpc error: code = NotFound desc = could not find container \"3825f1ead83dcd3102336fac0d11d166fa80ad61919bc98a2698931d693da33d\": container with ID starting with 3825f1ead83dcd3102336fac0d11d166fa80ad61919bc98a2698931d693da33d not found: ID does not exist" Jan 26 18:52:15 crc kubenswrapper[4787]: I0126 18:52:15.601333 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ae2621-7acf-4c81-80ca-a721bb67913c" path="/var/lib/kubelet/pods/a4ae2621-7acf-4c81-80ca-a721bb67913c/volumes" Jan 26 18:52:20 crc kubenswrapper[4787]: I0126 18:52:20.589731 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:52:20 crc kubenswrapper[4787]: E0126 18:52:20.590559 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:52:32 crc 
kubenswrapper[4787]: I0126 18:52:32.589262 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:52:32 crc kubenswrapper[4787]: E0126 18:52:32.590034 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:52:43 crc kubenswrapper[4787]: I0126 18:52:43.589918 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:52:43 crc kubenswrapper[4787]: E0126 18:52:43.590840 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.607855 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s5ncr"] Jan 26 18:52:54 crc kubenswrapper[4787]: E0126 18:52:54.608905 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ae2621-7acf-4c81-80ca-a721bb67913c" containerName="extract-utilities" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.608925 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ae2621-7acf-4c81-80ca-a721bb67913c" containerName="extract-utilities" Jan 26 18:52:54 crc kubenswrapper[4787]: E0126 18:52:54.608975 4787 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a4ae2621-7acf-4c81-80ca-a721bb67913c" containerName="extract-content" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.608987 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ae2621-7acf-4c81-80ca-a721bb67913c" containerName="extract-content" Jan 26 18:52:54 crc kubenswrapper[4787]: E0126 18:52:54.609008 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ae2621-7acf-4c81-80ca-a721bb67913c" containerName="registry-server" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.609019 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ae2621-7acf-4c81-80ca-a721bb67913c" containerName="registry-server" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.609277 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ae2621-7acf-4c81-80ca-a721bb67913c" containerName="registry-server" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.625484 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.631108 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5ncr"] Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.696831 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-catalog-content\") pod \"community-operators-s5ncr\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.697023 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnwll\" (UniqueName: \"kubernetes.io/projected/d11fead7-a539-4705-b0e3-b98d6a17b9df-kube-api-access-rnwll\") pod \"community-operators-s5ncr\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.697042 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-utilities\") pod \"community-operators-s5ncr\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.798603 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnwll\" (UniqueName: \"kubernetes.io/projected/d11fead7-a539-4705-b0e3-b98d6a17b9df-kube-api-access-rnwll\") pod \"community-operators-s5ncr\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.798647 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-utilities\") pod \"community-operators-s5ncr\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.798670 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-catalog-content\") pod \"community-operators-s5ncr\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.799550 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-catalog-content\") pod \"community-operators-s5ncr\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.799637 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-utilities\") pod \"community-operators-s5ncr\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.834216 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnwll\" (UniqueName: \"kubernetes.io/projected/d11fead7-a539-4705-b0e3-b98d6a17b9df-kube-api-access-rnwll\") pod \"community-operators-s5ncr\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:52:54 crc kubenswrapper[4787]: I0126 18:52:54.959566 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:52:55 crc kubenswrapper[4787]: I0126 18:52:55.451311 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5ncr"] Jan 26 18:52:55 crc kubenswrapper[4787]: I0126 18:52:55.775174 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5ncr" event={"ID":"d11fead7-a539-4705-b0e3-b98d6a17b9df","Type":"ContainerStarted","Data":"89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713"} Jan 26 18:52:55 crc kubenswrapper[4787]: I0126 18:52:55.775228 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5ncr" event={"ID":"d11fead7-a539-4705-b0e3-b98d6a17b9df","Type":"ContainerStarted","Data":"4605459189e309883d45d345f3bc7960f19fe427e1db91bba8b6713080eb62c9"} Jan 26 18:52:56 crc kubenswrapper[4787]: I0126 18:52:56.590225 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:52:56 crc kubenswrapper[4787]: I0126 18:52:56.787046 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"e27f3e7902e06e7049cc362e50aab0c2c722204a58d994dc1031a1ebe832c3be"} Jan 26 18:52:56 crc kubenswrapper[4787]: I0126 18:52:56.789175 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5ncr" event={"ID":"d11fead7-a539-4705-b0e3-b98d6a17b9df","Type":"ContainerDied","Data":"89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713"} Jan 26 18:52:56 crc kubenswrapper[4787]: I0126 18:52:56.789017 4787 generic.go:334] "Generic (PLEG): container finished" podID="d11fead7-a539-4705-b0e3-b98d6a17b9df" containerID="89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713" exitCode=0 Jan 
26 18:52:58 crc kubenswrapper[4787]: I0126 18:52:58.805764 4787 generic.go:334] "Generic (PLEG): container finished" podID="d11fead7-a539-4705-b0e3-b98d6a17b9df" containerID="071957fedcf740e3ad2f8ba8c4af942c38e740938e33c95ab2459804562fe6a8" exitCode=0 Jan 26 18:52:58 crc kubenswrapper[4787]: I0126 18:52:58.805832 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5ncr" event={"ID":"d11fead7-a539-4705-b0e3-b98d6a17b9df","Type":"ContainerDied","Data":"071957fedcf740e3ad2f8ba8c4af942c38e740938e33c95ab2459804562fe6a8"} Jan 26 18:53:00 crc kubenswrapper[4787]: I0126 18:53:00.831360 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5ncr" event={"ID":"d11fead7-a539-4705-b0e3-b98d6a17b9df","Type":"ContainerStarted","Data":"2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685"} Jan 26 18:53:00 crc kubenswrapper[4787]: I0126 18:53:00.851065 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s5ncr" podStartSLOduration=4.08842518 podStartE2EDuration="6.851048813s" podCreationTimestamp="2026-01-26 18:52:54 +0000 UTC" firstStartedPulling="2026-01-26 18:52:56.79238038 +0000 UTC m=+4145.499516513" lastFinishedPulling="2026-01-26 18:52:59.555004013 +0000 UTC m=+4148.262140146" observedRunningTime="2026-01-26 18:53:00.848343786 +0000 UTC m=+4149.555479919" watchObservedRunningTime="2026-01-26 18:53:00.851048813 +0000 UTC m=+4149.558184946" Jan 26 18:53:04 crc kubenswrapper[4787]: I0126 18:53:04.960409 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:53:04 crc kubenswrapper[4787]: I0126 18:53:04.960756 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:53:05 crc kubenswrapper[4787]: I0126 18:53:05.010912 4787 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:53:05 crc kubenswrapper[4787]: I0126 18:53:05.913066 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:53:05 crc kubenswrapper[4787]: I0126 18:53:05.967592 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5ncr"] Jan 26 18:53:07 crc kubenswrapper[4787]: I0126 18:53:07.881931 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s5ncr" podUID="d11fead7-a539-4705-b0e3-b98d6a17b9df" containerName="registry-server" containerID="cri-o://2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685" gracePeriod=2 Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.365936 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.461307 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnwll\" (UniqueName: \"kubernetes.io/projected/d11fead7-a539-4705-b0e3-b98d6a17b9df-kube-api-access-rnwll\") pod \"d11fead7-a539-4705-b0e3-b98d6a17b9df\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.461404 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-utilities\") pod \"d11fead7-a539-4705-b0e3-b98d6a17b9df\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.461510 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-catalog-content\") pod \"d11fead7-a539-4705-b0e3-b98d6a17b9df\" (UID: \"d11fead7-a539-4705-b0e3-b98d6a17b9df\") " Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.462499 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-utilities" (OuterVolumeSpecName: "utilities") pod "d11fead7-a539-4705-b0e3-b98d6a17b9df" (UID: "d11fead7-a539-4705-b0e3-b98d6a17b9df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.513320 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d11fead7-a539-4705-b0e3-b98d6a17b9df" (UID: "d11fead7-a539-4705-b0e3-b98d6a17b9df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.563078 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.563118 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11fead7-a539-4705-b0e3-b98d6a17b9df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.776894 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11fead7-a539-4705-b0e3-b98d6a17b9df-kube-api-access-rnwll" (OuterVolumeSpecName: "kube-api-access-rnwll") pod "d11fead7-a539-4705-b0e3-b98d6a17b9df" (UID: "d11fead7-a539-4705-b0e3-b98d6a17b9df"). InnerVolumeSpecName "kube-api-access-rnwll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.867149 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnwll\" (UniqueName: \"kubernetes.io/projected/d11fead7-a539-4705-b0e3-b98d6a17b9df-kube-api-access-rnwll\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.912147 4787 generic.go:334] "Generic (PLEG): container finished" podID="d11fead7-a539-4705-b0e3-b98d6a17b9df" containerID="2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685" exitCode=0 Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.912212 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5ncr" event={"ID":"d11fead7-a539-4705-b0e3-b98d6a17b9df","Type":"ContainerDied","Data":"2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685"} Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.912247 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5ncr" Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.912275 4787 scope.go:117] "RemoveContainer" containerID="2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685" Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.912259 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5ncr" event={"ID":"d11fead7-a539-4705-b0e3-b98d6a17b9df","Type":"ContainerDied","Data":"4605459189e309883d45d345f3bc7960f19fe427e1db91bba8b6713080eb62c9"} Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.933070 4787 scope.go:117] "RemoveContainer" containerID="071957fedcf740e3ad2f8ba8c4af942c38e740938e33c95ab2459804562fe6a8" Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.956177 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5ncr"] Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.961282 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s5ncr"] Jan 26 18:53:09 crc kubenswrapper[4787]: I0126 18:53:09.987657 4787 scope.go:117] "RemoveContainer" containerID="89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713" Jan 26 18:53:10 crc kubenswrapper[4787]: I0126 18:53:10.006076 4787 scope.go:117] "RemoveContainer" containerID="2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685" Jan 26 18:53:10 crc kubenswrapper[4787]: E0126 18:53:10.006534 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685\": container with ID starting with 2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685 not found: ID does not exist" containerID="2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685" Jan 26 18:53:10 crc kubenswrapper[4787]: I0126 18:53:10.006566 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685"} err="failed to get container status \"2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685\": rpc error: code = NotFound desc = could not find container \"2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685\": container with ID starting with 2f98602a54bb0b63d18a8144eaecceffe46e262a85c23716b3dd6647ec6ef685 not found: ID does not exist" Jan 26 18:53:10 crc kubenswrapper[4787]: I0126 18:53:10.006588 4787 scope.go:117] "RemoveContainer" containerID="071957fedcf740e3ad2f8ba8c4af942c38e740938e33c95ab2459804562fe6a8" Jan 26 18:53:10 crc kubenswrapper[4787]: E0126 18:53:10.006814 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"071957fedcf740e3ad2f8ba8c4af942c38e740938e33c95ab2459804562fe6a8\": container with ID starting with 071957fedcf740e3ad2f8ba8c4af942c38e740938e33c95ab2459804562fe6a8 not found: ID does not exist" containerID="071957fedcf740e3ad2f8ba8c4af942c38e740938e33c95ab2459804562fe6a8" Jan 26 18:53:10 crc kubenswrapper[4787]: I0126 18:53:10.006843 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"071957fedcf740e3ad2f8ba8c4af942c38e740938e33c95ab2459804562fe6a8"} err="failed to get container status \"071957fedcf740e3ad2f8ba8c4af942c38e740938e33c95ab2459804562fe6a8\": rpc error: code = NotFound desc = could not find container \"071957fedcf740e3ad2f8ba8c4af942c38e740938e33c95ab2459804562fe6a8\": container with ID starting with 071957fedcf740e3ad2f8ba8c4af942c38e740938e33c95ab2459804562fe6a8 not found: ID does not exist" Jan 26 18:53:10 crc kubenswrapper[4787]: I0126 18:53:10.006862 4787 scope.go:117] "RemoveContainer" containerID="89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713" Jan 26 18:53:10 crc kubenswrapper[4787]: E0126 
18:53:10.007098 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713\": container with ID starting with 89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713 not found: ID does not exist" containerID="89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713" Jan 26 18:53:10 crc kubenswrapper[4787]: I0126 18:53:10.007127 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713"} err="failed to get container status \"89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713\": rpc error: code = NotFound desc = could not find container \"89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713\": container with ID starting with 89f5203d2ec979465b2feed631bcb18b2aea9b005ac58cd8b284ed81a1e29713 not found: ID does not exist" Jan 26 18:53:11 crc kubenswrapper[4787]: I0126 18:53:11.598555 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11fead7-a539-4705-b0e3-b98d6a17b9df" path="/var/lib/kubelet/pods/d11fead7-a539-4705-b0e3-b98d6a17b9df/volumes" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.467070 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qppxv"] Jan 26 18:54:26 crc kubenswrapper[4787]: E0126 18:54:26.468414 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fead7-a539-4705-b0e3-b98d6a17b9df" containerName="extract-content" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.468433 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fead7-a539-4705-b0e3-b98d6a17b9df" containerName="extract-content" Jan 26 18:54:26 crc kubenswrapper[4787]: E0126 18:54:26.468442 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fead7-a539-4705-b0e3-b98d6a17b9df" 
containerName="registry-server" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.468449 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fead7-a539-4705-b0e3-b98d6a17b9df" containerName="registry-server" Jan 26 18:54:26 crc kubenswrapper[4787]: E0126 18:54:26.468476 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fead7-a539-4705-b0e3-b98d6a17b9df" containerName="extract-utilities" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.468485 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fead7-a539-4705-b0e3-b98d6a17b9df" containerName="extract-utilities" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.468669 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11fead7-a539-4705-b0e3-b98d6a17b9df" containerName="registry-server" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.473464 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.489943 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qppxv"] Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.569689 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkmdr\" (UniqueName: \"kubernetes.io/projected/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-kube-api-access-xkmdr\") pod \"redhat-operators-qppxv\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.569826 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-utilities\") pod \"redhat-operators-qppxv\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " 
pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.569864 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-catalog-content\") pod \"redhat-operators-qppxv\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.671271 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-utilities\") pod \"redhat-operators-qppxv\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.671346 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-catalog-content\") pod \"redhat-operators-qppxv\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.671402 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkmdr\" (UniqueName: \"kubernetes.io/projected/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-kube-api-access-xkmdr\") pod \"redhat-operators-qppxv\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.671730 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-utilities\") pod \"redhat-operators-qppxv\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " 
pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.671753 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-catalog-content\") pod \"redhat-operators-qppxv\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.695147 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkmdr\" (UniqueName: \"kubernetes.io/projected/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-kube-api-access-xkmdr\") pod \"redhat-operators-qppxv\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:26 crc kubenswrapper[4787]: I0126 18:54:26.804928 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:27 crc kubenswrapper[4787]: I0126 18:54:27.264876 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qppxv"] Jan 26 18:54:27 crc kubenswrapper[4787]: I0126 18:54:27.444670 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qppxv" event={"ID":"79b449a7-6f19-4986-9ceb-b460b4b6a8b2","Type":"ContainerStarted","Data":"a05503941a4190e34ab5e4a29169accf0560b18d98ca2cdc289250d08a3b72b5"} Jan 26 18:54:28 crc kubenswrapper[4787]: I0126 18:54:28.471634 4787 generic.go:334] "Generic (PLEG): container finished" podID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" containerID="943ee417936d98e32ca008bae2472744398c0029533f5cb9ef9d3390c361fe0c" exitCode=0 Jan 26 18:54:28 crc kubenswrapper[4787]: I0126 18:54:28.471691 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qppxv" 
event={"ID":"79b449a7-6f19-4986-9ceb-b460b4b6a8b2","Type":"ContainerDied","Data":"943ee417936d98e32ca008bae2472744398c0029533f5cb9ef9d3390c361fe0c"} Jan 26 18:54:28 crc kubenswrapper[4787]: I0126 18:54:28.474346 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:54:29 crc kubenswrapper[4787]: I0126 18:54:29.484836 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qppxv" event={"ID":"79b449a7-6f19-4986-9ceb-b460b4b6a8b2","Type":"ContainerStarted","Data":"b3e6f531a1b7326d80540064b5a4674a6ff6a4e17749332432c78fc2c64460f9"} Jan 26 18:54:30 crc kubenswrapper[4787]: I0126 18:54:30.495333 4787 generic.go:334] "Generic (PLEG): container finished" podID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" containerID="b3e6f531a1b7326d80540064b5a4674a6ff6a4e17749332432c78fc2c64460f9" exitCode=0 Jan 26 18:54:30 crc kubenswrapper[4787]: I0126 18:54:30.495418 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qppxv" event={"ID":"79b449a7-6f19-4986-9ceb-b460b4b6a8b2","Type":"ContainerDied","Data":"b3e6f531a1b7326d80540064b5a4674a6ff6a4e17749332432c78fc2c64460f9"} Jan 26 18:54:31 crc kubenswrapper[4787]: I0126 18:54:31.505642 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qppxv" event={"ID":"79b449a7-6f19-4986-9ceb-b460b4b6a8b2","Type":"ContainerStarted","Data":"42a90fddfd2a45ffecaec7f5060b561ff1272bc1f7bcb08f2db22cb5be4e6635"} Jan 26 18:54:31 crc kubenswrapper[4787]: I0126 18:54:31.526488 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qppxv" podStartSLOduration=3.057919433 podStartE2EDuration="5.526464241s" podCreationTimestamp="2026-01-26 18:54:26 +0000 UTC" firstStartedPulling="2026-01-26 18:54:28.474016957 +0000 UTC m=+4237.181153080" lastFinishedPulling="2026-01-26 18:54:30.942561765 +0000 UTC m=+4239.649697888" 
observedRunningTime="2026-01-26 18:54:31.524106753 +0000 UTC m=+4240.231242906" watchObservedRunningTime="2026-01-26 18:54:31.526464241 +0000 UTC m=+4240.233600414" Jan 26 18:54:36 crc kubenswrapper[4787]: I0126 18:54:36.805996 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:36 crc kubenswrapper[4787]: I0126 18:54:36.807380 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:36 crc kubenswrapper[4787]: I0126 18:54:36.849047 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:37 crc kubenswrapper[4787]: I0126 18:54:37.598062 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:37 crc kubenswrapper[4787]: I0126 18:54:37.653852 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qppxv"] Jan 26 18:54:39 crc kubenswrapper[4787]: I0126 18:54:39.559194 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qppxv" podUID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" containerName="registry-server" containerID="cri-o://42a90fddfd2a45ffecaec7f5060b561ff1272bc1f7bcb08f2db22cb5be4e6635" gracePeriod=2 Jan 26 18:54:42 crc kubenswrapper[4787]: I0126 18:54:42.580563 4787 generic.go:334] "Generic (PLEG): container finished" podID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" containerID="42a90fddfd2a45ffecaec7f5060b561ff1272bc1f7bcb08f2db22cb5be4e6635" exitCode=0 Jan 26 18:54:42 crc kubenswrapper[4787]: I0126 18:54:42.580639 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qppxv" 
event={"ID":"79b449a7-6f19-4986-9ceb-b460b4b6a8b2","Type":"ContainerDied","Data":"42a90fddfd2a45ffecaec7f5060b561ff1272bc1f7bcb08f2db22cb5be4e6635"} Jan 26 18:54:42 crc kubenswrapper[4787]: I0126 18:54:42.768151 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:42 crc kubenswrapper[4787]: I0126 18:54:42.909907 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-utilities\") pod \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " Jan 26 18:54:42 crc kubenswrapper[4787]: I0126 18:54:42.910023 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkmdr\" (UniqueName: \"kubernetes.io/projected/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-kube-api-access-xkmdr\") pod \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " Jan 26 18:54:42 crc kubenswrapper[4787]: I0126 18:54:42.910064 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-catalog-content\") pod \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\" (UID: \"79b449a7-6f19-4986-9ceb-b460b4b6a8b2\") " Jan 26 18:54:42 crc kubenswrapper[4787]: I0126 18:54:42.911388 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-utilities" (OuterVolumeSpecName: "utilities") pod "79b449a7-6f19-4986-9ceb-b460b4b6a8b2" (UID: "79b449a7-6f19-4986-9ceb-b460b4b6a8b2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:54:42 crc kubenswrapper[4787]: I0126 18:54:42.920660 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-kube-api-access-xkmdr" (OuterVolumeSpecName: "kube-api-access-xkmdr") pod "79b449a7-6f19-4986-9ceb-b460b4b6a8b2" (UID: "79b449a7-6f19-4986-9ceb-b460b4b6a8b2"). InnerVolumeSpecName "kube-api-access-xkmdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:43 crc kubenswrapper[4787]: I0126 18:54:43.012720 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkmdr\" (UniqueName: \"kubernetes.io/projected/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-kube-api-access-xkmdr\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:43 crc kubenswrapper[4787]: I0126 18:54:43.012992 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:43 crc kubenswrapper[4787]: I0126 18:54:43.035737 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79b449a7-6f19-4986-9ceb-b460b4b6a8b2" (UID: "79b449a7-6f19-4986-9ceb-b460b4b6a8b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:54:43 crc kubenswrapper[4787]: I0126 18:54:43.114172 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b449a7-6f19-4986-9ceb-b460b4b6a8b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:43 crc kubenswrapper[4787]: I0126 18:54:43.788699 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qppxv" Jan 26 18:54:43 crc kubenswrapper[4787]: I0126 18:54:43.793779 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qppxv" event={"ID":"79b449a7-6f19-4986-9ceb-b460b4b6a8b2","Type":"ContainerDied","Data":"a05503941a4190e34ab5e4a29169accf0560b18d98ca2cdc289250d08a3b72b5"} Jan 26 18:54:43 crc kubenswrapper[4787]: I0126 18:54:43.793838 4787 scope.go:117] "RemoveContainer" containerID="42a90fddfd2a45ffecaec7f5060b561ff1272bc1f7bcb08f2db22cb5be4e6635" Jan 26 18:54:43 crc kubenswrapper[4787]: I0126 18:54:43.829553 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qppxv"] Jan 26 18:54:43 crc kubenswrapper[4787]: I0126 18:54:43.835507 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qppxv"] Jan 26 18:54:43 crc kubenswrapper[4787]: I0126 18:54:43.844163 4787 scope.go:117] "RemoveContainer" containerID="b3e6f531a1b7326d80540064b5a4674a6ff6a4e17749332432c78fc2c64460f9" Jan 26 18:54:43 crc kubenswrapper[4787]: I0126 18:54:43.864262 4787 scope.go:117] "RemoveContainer" containerID="943ee417936d98e32ca008bae2472744398c0029533f5cb9ef9d3390c361fe0c" Jan 26 18:54:45 crc kubenswrapper[4787]: I0126 18:54:45.600895 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" path="/var/lib/kubelet/pods/79b449a7-6f19-4986-9ceb-b460b4b6a8b2/volumes" Jan 26 18:55:16 crc kubenswrapper[4787]: I0126 18:55:16.808414 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:55:16 crc kubenswrapper[4787]: I0126 18:55:16.808900 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:55:46 crc kubenswrapper[4787]: I0126 18:55:46.807673 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:55:46 crc kubenswrapper[4787]: I0126 18:55:46.808278 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:56:16 crc kubenswrapper[4787]: I0126 18:56:16.807894 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:56:16 crc kubenswrapper[4787]: I0126 18:56:16.808892 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:56:16 crc kubenswrapper[4787]: I0126 18:56:16.808986 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 18:56:16 crc 
kubenswrapper[4787]: I0126 18:56:16.809938 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e27f3e7902e06e7049cc362e50aab0c2c722204a58d994dc1031a1ebe832c3be"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:56:16 crc kubenswrapper[4787]: I0126 18:56:16.810028 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://e27f3e7902e06e7049cc362e50aab0c2c722204a58d994dc1031a1ebe832c3be" gracePeriod=600 Jan 26 18:56:17 crc kubenswrapper[4787]: I0126 18:56:17.525324 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="e27f3e7902e06e7049cc362e50aab0c2c722204a58d994dc1031a1ebe832c3be" exitCode=0 Jan 26 18:56:17 crc kubenswrapper[4787]: I0126 18:56:17.525429 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"e27f3e7902e06e7049cc362e50aab0c2c722204a58d994dc1031a1ebe832c3be"} Jan 26 18:56:17 crc kubenswrapper[4787]: I0126 18:56:17.526088 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691"} Jan 26 18:56:17 crc kubenswrapper[4787]: I0126 18:56:17.526117 4787 scope.go:117] "RemoveContainer" containerID="644d50656892343e01c7291828db81fb0753463bc43b241f04dd274529cd7539" Jan 26 18:58:46 crc kubenswrapper[4787]: I0126 18:58:46.807878 4787 
patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:58:46 crc kubenswrapper[4787]: I0126 18:58:46.808353 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.455131 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c4nw7"] Jan 26 18:59:00 crc kubenswrapper[4787]: E0126 18:59:00.456113 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" containerName="registry-server" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.456135 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" containerName="registry-server" Jan 26 18:59:00 crc kubenswrapper[4787]: E0126 18:59:00.456151 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" containerName="extract-utilities" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.456164 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" containerName="extract-utilities" Jan 26 18:59:00 crc kubenswrapper[4787]: E0126 18:59:00.456186 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" containerName="extract-content" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.456198 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" containerName="extract-content" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.456436 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="79b449a7-6f19-4986-9ceb-b460b4b6a8b2" containerName="registry-server" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.458239 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.468010 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4nw7"] Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.634717 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4nbf\" (UniqueName: \"kubernetes.io/projected/153c3536-4f42-4b33-884c-0d8ed91d0e31-kube-api-access-f4nbf\") pod \"certified-operators-c4nw7\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.634823 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-utilities\") pod \"certified-operators-c4nw7\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.634855 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-catalog-content\") pod \"certified-operators-c4nw7\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.735744 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-utilities\") pod \"certified-operators-c4nw7\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.735794 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-catalog-content\") pod \"certified-operators-c4nw7\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.735858 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4nbf\" (UniqueName: \"kubernetes.io/projected/153c3536-4f42-4b33-884c-0d8ed91d0e31-kube-api-access-f4nbf\") pod \"certified-operators-c4nw7\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.736387 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-utilities\") pod \"certified-operators-c4nw7\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.736435 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-catalog-content\") pod \"certified-operators-c4nw7\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.768852 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f4nbf\" (UniqueName: \"kubernetes.io/projected/153c3536-4f42-4b33-884c-0d8ed91d0e31-kube-api-access-f4nbf\") pod \"certified-operators-c4nw7\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:00 crc kubenswrapper[4787]: I0126 18:59:00.794148 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:01 crc kubenswrapper[4787]: I0126 18:59:01.056509 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4nw7"] Jan 26 18:59:01 crc kubenswrapper[4787]: I0126 18:59:01.758914 4787 generic.go:334] "Generic (PLEG): container finished" podID="153c3536-4f42-4b33-884c-0d8ed91d0e31" containerID="2ecfb8ced7432d8e03d5aeced13ebbdc1684de4b4fd5fc471cb73157ff074888" exitCode=0 Jan 26 18:59:01 crc kubenswrapper[4787]: I0126 18:59:01.758977 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4nw7" event={"ID":"153c3536-4f42-4b33-884c-0d8ed91d0e31","Type":"ContainerDied","Data":"2ecfb8ced7432d8e03d5aeced13ebbdc1684de4b4fd5fc471cb73157ff074888"} Jan 26 18:59:01 crc kubenswrapper[4787]: I0126 18:59:01.759226 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4nw7" event={"ID":"153c3536-4f42-4b33-884c-0d8ed91d0e31","Type":"ContainerStarted","Data":"22389550a7c72fb88777d96362029d128a4853cb1c909e44ab2d1e50376bba25"} Jan 26 18:59:02 crc kubenswrapper[4787]: I0126 18:59:02.769661 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4nw7" event={"ID":"153c3536-4f42-4b33-884c-0d8ed91d0e31","Type":"ContainerStarted","Data":"6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae"} Jan 26 18:59:03 crc kubenswrapper[4787]: I0126 18:59:03.779031 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="153c3536-4f42-4b33-884c-0d8ed91d0e31" containerID="6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae" exitCode=0 Jan 26 18:59:03 crc kubenswrapper[4787]: I0126 18:59:03.779113 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4nw7" event={"ID":"153c3536-4f42-4b33-884c-0d8ed91d0e31","Type":"ContainerDied","Data":"6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae"} Jan 26 18:59:04 crc kubenswrapper[4787]: I0126 18:59:04.787885 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4nw7" event={"ID":"153c3536-4f42-4b33-884c-0d8ed91d0e31","Type":"ContainerStarted","Data":"0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14"} Jan 26 18:59:04 crc kubenswrapper[4787]: I0126 18:59:04.805798 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c4nw7" podStartSLOduration=2.366462436 podStartE2EDuration="4.805776767s" podCreationTimestamp="2026-01-26 18:59:00 +0000 UTC" firstStartedPulling="2026-01-26 18:59:01.760476218 +0000 UTC m=+4510.467612351" lastFinishedPulling="2026-01-26 18:59:04.199790549 +0000 UTC m=+4512.906926682" observedRunningTime="2026-01-26 18:59:04.803426709 +0000 UTC m=+4513.510562852" watchObservedRunningTime="2026-01-26 18:59:04.805776767 +0000 UTC m=+4513.512912910" Jan 26 18:59:10 crc kubenswrapper[4787]: I0126 18:59:10.794333 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:10 crc kubenswrapper[4787]: I0126 18:59:10.794989 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:10 crc kubenswrapper[4787]: I0126 18:59:10.840899 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 
18:59:11 crc kubenswrapper[4787]: I0126 18:59:11.889896 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:11 crc kubenswrapper[4787]: I0126 18:59:11.938780 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4nw7"] Jan 26 18:59:13 crc kubenswrapper[4787]: I0126 18:59:13.851577 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c4nw7" podUID="153c3536-4f42-4b33-884c-0d8ed91d0e31" containerName="registry-server" containerID="cri-o://0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14" gracePeriod=2 Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.738749 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.859093 4787 generic.go:334] "Generic (PLEG): container finished" podID="153c3536-4f42-4b33-884c-0d8ed91d0e31" containerID="0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14" exitCode=0 Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.860099 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4nw7" event={"ID":"153c3536-4f42-4b33-884c-0d8ed91d0e31","Type":"ContainerDied","Data":"0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14"} Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.860146 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4nw7" event={"ID":"153c3536-4f42-4b33-884c-0d8ed91d0e31","Type":"ContainerDied","Data":"22389550a7c72fb88777d96362029d128a4853cb1c909e44ab2d1e50376bba25"} Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.860146 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c4nw7" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.860166 4787 scope.go:117] "RemoveContainer" containerID="0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.876589 4787 scope.go:117] "RemoveContainer" containerID="6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.892969 4787 scope.go:117] "RemoveContainer" containerID="2ecfb8ced7432d8e03d5aeced13ebbdc1684de4b4fd5fc471cb73157ff074888" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.915600 4787 scope.go:117] "RemoveContainer" containerID="0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14" Jan 26 18:59:14 crc kubenswrapper[4787]: E0126 18:59:14.916313 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14\": container with ID starting with 0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14 not found: ID does not exist" containerID="0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.916381 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14"} err="failed to get container status \"0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14\": rpc error: code = NotFound desc = could not find container \"0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14\": container with ID starting with 0ec20f7298db5e21dc2393bc3d0cb550f2b0630578fce8f408b9f8b32c800f14 not found: ID does not exist" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.916420 4787 scope.go:117] "RemoveContainer" 
containerID="6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae" Jan 26 18:59:14 crc kubenswrapper[4787]: E0126 18:59:14.916817 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae\": container with ID starting with 6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae not found: ID does not exist" containerID="6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.916852 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae"} err="failed to get container status \"6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae\": rpc error: code = NotFound desc = could not find container \"6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae\": container with ID starting with 6f71e754eee16f8ab02e1a915d1082d0f7cbf61f3bfda013a2e8205389071bae not found: ID does not exist" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.916878 4787 scope.go:117] "RemoveContainer" containerID="2ecfb8ced7432d8e03d5aeced13ebbdc1684de4b4fd5fc471cb73157ff074888" Jan 26 18:59:14 crc kubenswrapper[4787]: E0126 18:59:14.917142 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ecfb8ced7432d8e03d5aeced13ebbdc1684de4b4fd5fc471cb73157ff074888\": container with ID starting with 2ecfb8ced7432d8e03d5aeced13ebbdc1684de4b4fd5fc471cb73157ff074888 not found: ID does not exist" containerID="2ecfb8ced7432d8e03d5aeced13ebbdc1684de4b4fd5fc471cb73157ff074888" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.917168 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2ecfb8ced7432d8e03d5aeced13ebbdc1684de4b4fd5fc471cb73157ff074888"} err="failed to get container status \"2ecfb8ced7432d8e03d5aeced13ebbdc1684de4b4fd5fc471cb73157ff074888\": rpc error: code = NotFound desc = could not find container \"2ecfb8ced7432d8e03d5aeced13ebbdc1684de4b4fd5fc471cb73157ff074888\": container with ID starting with 2ecfb8ced7432d8e03d5aeced13ebbdc1684de4b4fd5fc471cb73157ff074888 not found: ID does not exist" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.934840 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4nbf\" (UniqueName: \"kubernetes.io/projected/153c3536-4f42-4b33-884c-0d8ed91d0e31-kube-api-access-f4nbf\") pod \"153c3536-4f42-4b33-884c-0d8ed91d0e31\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.934914 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-catalog-content\") pod \"153c3536-4f42-4b33-884c-0d8ed91d0e31\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.935094 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-utilities\") pod \"153c3536-4f42-4b33-884c-0d8ed91d0e31\" (UID: \"153c3536-4f42-4b33-884c-0d8ed91d0e31\") " Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.936115 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-utilities" (OuterVolumeSpecName: "utilities") pod "153c3536-4f42-4b33-884c-0d8ed91d0e31" (UID: "153c3536-4f42-4b33-884c-0d8ed91d0e31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.936515 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.940992 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153c3536-4f42-4b33-884c-0d8ed91d0e31-kube-api-access-f4nbf" (OuterVolumeSpecName: "kube-api-access-f4nbf") pod "153c3536-4f42-4b33-884c-0d8ed91d0e31" (UID: "153c3536-4f42-4b33-884c-0d8ed91d0e31"). InnerVolumeSpecName "kube-api-access-f4nbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:59:14 crc kubenswrapper[4787]: I0126 18:59:14.984384 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "153c3536-4f42-4b33-884c-0d8ed91d0e31" (UID: "153c3536-4f42-4b33-884c-0d8ed91d0e31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:59:15 crc kubenswrapper[4787]: I0126 18:59:15.038298 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4nbf\" (UniqueName: \"kubernetes.io/projected/153c3536-4f42-4b33-884c-0d8ed91d0e31-kube-api-access-f4nbf\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:15 crc kubenswrapper[4787]: I0126 18:59:15.038346 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153c3536-4f42-4b33-884c-0d8ed91d0e31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:15 crc kubenswrapper[4787]: I0126 18:59:15.190547 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4nw7"] Jan 26 18:59:15 crc kubenswrapper[4787]: I0126 18:59:15.195465 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c4nw7"] Jan 26 18:59:15 crc kubenswrapper[4787]: I0126 18:59:15.599532 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153c3536-4f42-4b33-884c-0d8ed91d0e31" path="/var/lib/kubelet/pods/153c3536-4f42-4b33-884c-0d8ed91d0e31/volumes" Jan 26 18:59:16 crc kubenswrapper[4787]: I0126 18:59:16.807691 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:59:16 crc kubenswrapper[4787]: I0126 18:59:16.808058 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 
18:59:29.445243 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-c9g8l"] Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.453085 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-c9g8l"] Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.561475 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-rsg7w"] Jan 26 18:59:29 crc kubenswrapper[4787]: E0126 18:59:29.561811 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153c3536-4f42-4b33-884c-0d8ed91d0e31" containerName="registry-server" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.561831 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="153c3536-4f42-4b33-884c-0d8ed91d0e31" containerName="registry-server" Jan 26 18:59:29 crc kubenswrapper[4787]: E0126 18:59:29.561842 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153c3536-4f42-4b33-884c-0d8ed91d0e31" containerName="extract-content" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.561847 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="153c3536-4f42-4b33-884c-0d8ed91d0e31" containerName="extract-content" Jan 26 18:59:29 crc kubenswrapper[4787]: E0126 18:59:29.561861 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153c3536-4f42-4b33-884c-0d8ed91d0e31" containerName="extract-utilities" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.561867 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="153c3536-4f42-4b33-884c-0d8ed91d0e31" containerName="extract-utilities" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.562013 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="153c3536-4f42-4b33-884c-0d8ed91d0e31" containerName="registry-server" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.562508 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.564725 4787 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-n44nr" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.565012 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.565177 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.565346 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.572935 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rsg7w"] Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.598682 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eff1bd9-25f9-4008-bae4-9854e7bcb8d9" path="/var/lib/kubelet/pods/7eff1bd9-25f9-4008-bae4-9854e7bcb8d9/volumes" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.742757 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/864425bf-c171-4984-8ccc-2b99c9658271-node-mnt\") pod \"crc-storage-crc-rsg7w\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.742814 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/864425bf-c171-4984-8ccc-2b99c9658271-crc-storage\") pod \"crc-storage-crc-rsg7w\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.742884 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qzcc\" (UniqueName: \"kubernetes.io/projected/864425bf-c171-4984-8ccc-2b99c9658271-kube-api-access-4qzcc\") pod \"crc-storage-crc-rsg7w\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.845166 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/864425bf-c171-4984-8ccc-2b99c9658271-node-mnt\") pod \"crc-storage-crc-rsg7w\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.845238 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/864425bf-c171-4984-8ccc-2b99c9658271-crc-storage\") pod \"crc-storage-crc-rsg7w\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.845312 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qzcc\" (UniqueName: \"kubernetes.io/projected/864425bf-c171-4984-8ccc-2b99c9658271-kube-api-access-4qzcc\") pod \"crc-storage-crc-rsg7w\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.845562 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/864425bf-c171-4984-8ccc-2b99c9658271-node-mnt\") pod \"crc-storage-crc-rsg7w\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.846094 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/864425bf-c171-4984-8ccc-2b99c9658271-crc-storage\") pod \"crc-storage-crc-rsg7w\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.870365 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qzcc\" (UniqueName: \"kubernetes.io/projected/864425bf-c171-4984-8ccc-2b99c9658271-kube-api-access-4qzcc\") pod \"crc-storage-crc-rsg7w\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:29 crc kubenswrapper[4787]: I0126 18:59:29.890871 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:30 crc kubenswrapper[4787]: I0126 18:59:30.299433 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:59:30 crc kubenswrapper[4787]: I0126 18:59:30.306811 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rsg7w"] Jan 26 18:59:30 crc kubenswrapper[4787]: I0126 18:59:30.984144 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rsg7w" event={"ID":"864425bf-c171-4984-8ccc-2b99c9658271","Type":"ContainerStarted","Data":"369734c6cac9d784cbf95fb8d954e7cee7c4521d3e58e16256b597ddd703b8cf"} Jan 26 18:59:30 crc kubenswrapper[4787]: I0126 18:59:30.984481 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rsg7w" event={"ID":"864425bf-c171-4984-8ccc-2b99c9658271","Type":"ContainerStarted","Data":"82066ce9725820e0ec8138e1c05769c4a07f5370aba3a83c33d6d908ced39097"} Jan 26 18:59:31 crc kubenswrapper[4787]: I0126 18:59:31.003454 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-rsg7w" podStartSLOduration=1.538022555 podStartE2EDuration="2.003433175s" podCreationTimestamp="2026-01-26 18:59:29 +0000 UTC" 
firstStartedPulling="2026-01-26 18:59:30.299199776 +0000 UTC m=+4539.006335909" lastFinishedPulling="2026-01-26 18:59:30.764610406 +0000 UTC m=+4539.471746529" observedRunningTime="2026-01-26 18:59:31.000331969 +0000 UTC m=+4539.707468112" watchObservedRunningTime="2026-01-26 18:59:31.003433175 +0000 UTC m=+4539.710569338" Jan 26 18:59:31 crc kubenswrapper[4787]: E0126 18:59:31.142509 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod864425bf_c171_4984_8ccc_2b99c9658271.slice/crio-369734c6cac9d784cbf95fb8d954e7cee7c4521d3e58e16256b597ddd703b8cf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod864425bf_c171_4984_8ccc_2b99c9658271.slice/crio-conmon-369734c6cac9d784cbf95fb8d954e7cee7c4521d3e58e16256b597ddd703b8cf.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:59:31 crc kubenswrapper[4787]: I0126 18:59:31.992424 4787 generic.go:334] "Generic (PLEG): container finished" podID="864425bf-c171-4984-8ccc-2b99c9658271" containerID="369734c6cac9d784cbf95fb8d954e7cee7c4521d3e58e16256b597ddd703b8cf" exitCode=0 Jan 26 18:59:31 crc kubenswrapper[4787]: I0126 18:59:31.992484 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rsg7w" event={"ID":"864425bf-c171-4984-8ccc-2b99c9658271","Type":"ContainerDied","Data":"369734c6cac9d784cbf95fb8d954e7cee7c4521d3e58e16256b597ddd703b8cf"} Jan 26 18:59:33 crc kubenswrapper[4787]: I0126 18:59:33.273140 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:33 crc kubenswrapper[4787]: I0126 18:59:33.390311 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/864425bf-c171-4984-8ccc-2b99c9658271-crc-storage\") pod \"864425bf-c171-4984-8ccc-2b99c9658271\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " Jan 26 18:59:33 crc kubenswrapper[4787]: I0126 18:59:33.390435 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/864425bf-c171-4984-8ccc-2b99c9658271-node-mnt\") pod \"864425bf-c171-4984-8ccc-2b99c9658271\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " Jan 26 18:59:33 crc kubenswrapper[4787]: I0126 18:59:33.390476 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qzcc\" (UniqueName: \"kubernetes.io/projected/864425bf-c171-4984-8ccc-2b99c9658271-kube-api-access-4qzcc\") pod \"864425bf-c171-4984-8ccc-2b99c9658271\" (UID: \"864425bf-c171-4984-8ccc-2b99c9658271\") " Jan 26 18:59:33 crc kubenswrapper[4787]: I0126 18:59:33.390623 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/864425bf-c171-4984-8ccc-2b99c9658271-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "864425bf-c171-4984-8ccc-2b99c9658271" (UID: "864425bf-c171-4984-8ccc-2b99c9658271"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:59:33 crc kubenswrapper[4787]: I0126 18:59:33.391015 4787 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/864425bf-c171-4984-8ccc-2b99c9658271-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:33 crc kubenswrapper[4787]: I0126 18:59:33.475862 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864425bf-c171-4984-8ccc-2b99c9658271-kube-api-access-4qzcc" (OuterVolumeSpecName: "kube-api-access-4qzcc") pod "864425bf-c171-4984-8ccc-2b99c9658271" (UID: "864425bf-c171-4984-8ccc-2b99c9658271"). InnerVolumeSpecName "kube-api-access-4qzcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:59:33 crc kubenswrapper[4787]: I0126 18:59:33.477286 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/864425bf-c171-4984-8ccc-2b99c9658271-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "864425bf-c171-4984-8ccc-2b99c9658271" (UID: "864425bf-c171-4984-8ccc-2b99c9658271"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:33 crc kubenswrapper[4787]: I0126 18:59:33.492331 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qzcc\" (UniqueName: \"kubernetes.io/projected/864425bf-c171-4984-8ccc-2b99c9658271-kube-api-access-4qzcc\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:33 crc kubenswrapper[4787]: I0126 18:59:33.492365 4787 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/864425bf-c171-4984-8ccc-2b99c9658271-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:34 crc kubenswrapper[4787]: I0126 18:59:34.007934 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rsg7w" event={"ID":"864425bf-c171-4984-8ccc-2b99c9658271","Type":"ContainerDied","Data":"82066ce9725820e0ec8138e1c05769c4a07f5370aba3a83c33d6d908ced39097"} Jan 26 18:59:34 crc kubenswrapper[4787]: I0126 18:59:34.008331 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82066ce9725820e0ec8138e1c05769c4a07f5370aba3a83c33d6d908ced39097" Jan 26 18:59:34 crc kubenswrapper[4787]: I0126 18:59:34.008050 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rsg7w" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.081864 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-rsg7w"] Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.087463 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-rsg7w"] Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.202795 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-9xxvr"] Jan 26 18:59:35 crc kubenswrapper[4787]: E0126 18:59:35.203403 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864425bf-c171-4984-8ccc-2b99c9658271" containerName="storage" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.203426 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="864425bf-c171-4984-8ccc-2b99c9658271" containerName="storage" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.203634 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="864425bf-c171-4984-8ccc-2b99c9658271" containerName="storage" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.204429 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.207302 4787 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-n44nr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.208400 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.208473 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.209312 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.213900 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/76349694-d2a8-4a96-96b5-25540fcc8797-node-mnt\") pod \"crc-storage-crc-9xxvr\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.214021 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z65l\" (UniqueName: \"kubernetes.io/projected/76349694-d2a8-4a96-96b5-25540fcc8797-kube-api-access-4z65l\") pod \"crc-storage-crc-9xxvr\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.214144 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/76349694-d2a8-4a96-96b5-25540fcc8797-crc-storage\") pod \"crc-storage-crc-9xxvr\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.214877 4787 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-9xxvr"] Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.315311 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/76349694-d2a8-4a96-96b5-25540fcc8797-node-mnt\") pod \"crc-storage-crc-9xxvr\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.315426 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z65l\" (UniqueName: \"kubernetes.io/projected/76349694-d2a8-4a96-96b5-25540fcc8797-kube-api-access-4z65l\") pod \"crc-storage-crc-9xxvr\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.315462 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/76349694-d2a8-4a96-96b5-25540fcc8797-crc-storage\") pod \"crc-storage-crc-9xxvr\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.315672 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/76349694-d2a8-4a96-96b5-25540fcc8797-node-mnt\") pod \"crc-storage-crc-9xxvr\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.316674 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/76349694-d2a8-4a96-96b5-25540fcc8797-crc-storage\") pod \"crc-storage-crc-9xxvr\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 
18:59:35.335391 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z65l\" (UniqueName: \"kubernetes.io/projected/76349694-d2a8-4a96-96b5-25540fcc8797-kube-api-access-4z65l\") pod \"crc-storage-crc-9xxvr\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.520066 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.600452 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864425bf-c171-4984-8ccc-2b99c9658271" path="/var/lib/kubelet/pods/864425bf-c171-4984-8ccc-2b99c9658271/volumes" Jan 26 18:59:35 crc kubenswrapper[4787]: I0126 18:59:35.962344 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-9xxvr"] Jan 26 18:59:36 crc kubenswrapper[4787]: I0126 18:59:36.021586 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-9xxvr" event={"ID":"76349694-d2a8-4a96-96b5-25540fcc8797","Type":"ContainerStarted","Data":"c23643198ae17965309dec4c6afb82e9dd1fb6e620b4f5035ebefdff31528296"} Jan 26 18:59:37 crc kubenswrapper[4787]: I0126 18:59:37.029932 4787 generic.go:334] "Generic (PLEG): container finished" podID="76349694-d2a8-4a96-96b5-25540fcc8797" containerID="203a9223829e48793f3e95a1f5da5e320090a9e47f2b00a3d929c988de658b4a" exitCode=0 Jan 26 18:59:37 crc kubenswrapper[4787]: I0126 18:59:37.029994 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-9xxvr" event={"ID":"76349694-d2a8-4a96-96b5-25540fcc8797","Type":"ContainerDied","Data":"203a9223829e48793f3e95a1f5da5e320090a9e47f2b00a3d929c988de658b4a"} Jan 26 18:59:38 crc kubenswrapper[4787]: I0126 18:59:38.315412 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:38 crc kubenswrapper[4787]: I0126 18:59:38.460406 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z65l\" (UniqueName: \"kubernetes.io/projected/76349694-d2a8-4a96-96b5-25540fcc8797-kube-api-access-4z65l\") pod \"76349694-d2a8-4a96-96b5-25540fcc8797\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " Jan 26 18:59:38 crc kubenswrapper[4787]: I0126 18:59:38.460455 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/76349694-d2a8-4a96-96b5-25540fcc8797-node-mnt\") pod \"76349694-d2a8-4a96-96b5-25540fcc8797\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " Jan 26 18:59:38 crc kubenswrapper[4787]: I0126 18:59:38.460537 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/76349694-d2a8-4a96-96b5-25540fcc8797-crc-storage\") pod \"76349694-d2a8-4a96-96b5-25540fcc8797\" (UID: \"76349694-d2a8-4a96-96b5-25540fcc8797\") " Jan 26 18:59:38 crc kubenswrapper[4787]: I0126 18:59:38.461303 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76349694-d2a8-4a96-96b5-25540fcc8797-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "76349694-d2a8-4a96-96b5-25540fcc8797" (UID: "76349694-d2a8-4a96-96b5-25540fcc8797"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:59:38 crc kubenswrapper[4787]: I0126 18:59:38.465860 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76349694-d2a8-4a96-96b5-25540fcc8797-kube-api-access-4z65l" (OuterVolumeSpecName: "kube-api-access-4z65l") pod "76349694-d2a8-4a96-96b5-25540fcc8797" (UID: "76349694-d2a8-4a96-96b5-25540fcc8797"). InnerVolumeSpecName "kube-api-access-4z65l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:59:38 crc kubenswrapper[4787]: I0126 18:59:38.480280 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76349694-d2a8-4a96-96b5-25540fcc8797-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "76349694-d2a8-4a96-96b5-25540fcc8797" (UID: "76349694-d2a8-4a96-96b5-25540fcc8797"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:38 crc kubenswrapper[4787]: I0126 18:59:38.561509 4787 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/76349694-d2a8-4a96-96b5-25540fcc8797-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:38 crc kubenswrapper[4787]: I0126 18:59:38.561545 4787 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/76349694-d2a8-4a96-96b5-25540fcc8797-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:38 crc kubenswrapper[4787]: I0126 18:59:38.561556 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z65l\" (UniqueName: \"kubernetes.io/projected/76349694-d2a8-4a96-96b5-25540fcc8797-kube-api-access-4z65l\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:39 crc kubenswrapper[4787]: I0126 18:59:39.043924 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-9xxvr" event={"ID":"76349694-d2a8-4a96-96b5-25540fcc8797","Type":"ContainerDied","Data":"c23643198ae17965309dec4c6afb82e9dd1fb6e620b4f5035ebefdff31528296"} Jan 26 18:59:39 crc kubenswrapper[4787]: I0126 18:59:39.044003 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c23643198ae17965309dec4c6afb82e9dd1fb6e620b4f5035ebefdff31528296" Jan 26 18:59:39 crc kubenswrapper[4787]: I0126 18:59:39.044029 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-9xxvr" Jan 26 18:59:46 crc kubenswrapper[4787]: I0126 18:59:46.808058 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:59:46 crc kubenswrapper[4787]: I0126 18:59:46.808638 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:59:46 crc kubenswrapper[4787]: I0126 18:59:46.808704 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 18:59:46 crc kubenswrapper[4787]: I0126 18:59:46.809375 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:59:46 crc kubenswrapper[4787]: I0126 18:59:46.809444 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" gracePeriod=600 Jan 26 18:59:46 crc kubenswrapper[4787]: E0126 18:59:46.934075 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:59:47 crc kubenswrapper[4787]: I0126 18:59:47.098735 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" exitCode=0 Jan 26 18:59:47 crc kubenswrapper[4787]: I0126 18:59:47.098783 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691"} Jan 26 18:59:47 crc kubenswrapper[4787]: I0126 18:59:47.098819 4787 scope.go:117] "RemoveContainer" containerID="e27f3e7902e06e7049cc362e50aab0c2c722204a58d994dc1031a1ebe832c3be" Jan 26 18:59:47 crc kubenswrapper[4787]: I0126 18:59:47.099493 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 18:59:47 crc kubenswrapper[4787]: E0126 18:59:47.099884 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 18:59:57 crc kubenswrapper[4787]: I0126 18:59:57.883172 4787 scope.go:117] "RemoveContainer" containerID="bfceaf62ddadb3718c46dd3c39b2e43c5f054ffec7d750806adeef0fdacd8701" Jan 26 19:00:00 crc 
kubenswrapper[4787]: I0126 19:00:00.156207 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm"] Jan 26 19:00:00 crc kubenswrapper[4787]: E0126 19:00:00.156828 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76349694-d2a8-4a96-96b5-25540fcc8797" containerName="storage" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.156842 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="76349694-d2a8-4a96-96b5-25540fcc8797" containerName="storage" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.157025 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="76349694-d2a8-4a96-96b5-25540fcc8797" containerName="storage" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.157500 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.159658 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.159823 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.163420 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431e05a1-e497-43f1-94b5-af113f71f052-secret-volume\") pod \"collect-profiles-29490900-544pm\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.163699 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/431e05a1-e497-43f1-94b5-af113f71f052-config-volume\") pod \"collect-profiles-29490900-544pm\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.163819 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fjrk\" (UniqueName: \"kubernetes.io/projected/431e05a1-e497-43f1-94b5-af113f71f052-kube-api-access-8fjrk\") pod \"collect-profiles-29490900-544pm\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.179281 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm"] Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.265264 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431e05a1-e497-43f1-94b5-af113f71f052-config-volume\") pod \"collect-profiles-29490900-544pm\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.265336 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fjrk\" (UniqueName: \"kubernetes.io/projected/431e05a1-e497-43f1-94b5-af113f71f052-kube-api-access-8fjrk\") pod \"collect-profiles-29490900-544pm\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.265417 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/431e05a1-e497-43f1-94b5-af113f71f052-secret-volume\") pod \"collect-profiles-29490900-544pm\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.266316 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431e05a1-e497-43f1-94b5-af113f71f052-config-volume\") pod \"collect-profiles-29490900-544pm\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.271701 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431e05a1-e497-43f1-94b5-af113f71f052-secret-volume\") pod \"collect-profiles-29490900-544pm\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.477863 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fjrk\" (UniqueName: \"kubernetes.io/projected/431e05a1-e497-43f1-94b5-af113f71f052-kube-api-access-8fjrk\") pod \"collect-profiles-29490900-544pm\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.480092 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:00 crc kubenswrapper[4787]: I0126 19:00:00.962502 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm"] Jan 26 19:00:01 crc kubenswrapper[4787]: I0126 19:00:01.198044 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" event={"ID":"431e05a1-e497-43f1-94b5-af113f71f052","Type":"ContainerStarted","Data":"5ec92f583fb88f4cf35cb77047604b864f6b399041fe114f135e66dd2652f9cf"} Jan 26 19:00:01 crc kubenswrapper[4787]: I0126 19:00:01.601743 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:00:01 crc kubenswrapper[4787]: E0126 19:00:01.602421 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:00:02 crc kubenswrapper[4787]: I0126 19:00:02.206456 4787 generic.go:334] "Generic (PLEG): container finished" podID="431e05a1-e497-43f1-94b5-af113f71f052" containerID="6d472819035d61f35ca2c9bb3ce13e24a58ac5c68f159f2ce0cbd440ed4601da" exitCode=0 Jan 26 19:00:02 crc kubenswrapper[4787]: I0126 19:00:02.206553 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" event={"ID":"431e05a1-e497-43f1-94b5-af113f71f052","Type":"ContainerDied","Data":"6d472819035d61f35ca2c9bb3ce13e24a58ac5c68f159f2ce0cbd440ed4601da"} Jan 26 19:00:03 crc kubenswrapper[4787]: I0126 19:00:03.525041 4787 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:03 crc kubenswrapper[4787]: I0126 19:00:03.713194 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431e05a1-e497-43f1-94b5-af113f71f052-config-volume\") pod \"431e05a1-e497-43f1-94b5-af113f71f052\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " Jan 26 19:00:03 crc kubenswrapper[4787]: I0126 19:00:03.713436 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fjrk\" (UniqueName: \"kubernetes.io/projected/431e05a1-e497-43f1-94b5-af113f71f052-kube-api-access-8fjrk\") pod \"431e05a1-e497-43f1-94b5-af113f71f052\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " Jan 26 19:00:03 crc kubenswrapper[4787]: I0126 19:00:03.713483 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431e05a1-e497-43f1-94b5-af113f71f052-secret-volume\") pod \"431e05a1-e497-43f1-94b5-af113f71f052\" (UID: \"431e05a1-e497-43f1-94b5-af113f71f052\") " Jan 26 19:00:03 crc kubenswrapper[4787]: I0126 19:00:03.714007 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/431e05a1-e497-43f1-94b5-af113f71f052-config-volume" (OuterVolumeSpecName: "config-volume") pod "431e05a1-e497-43f1-94b5-af113f71f052" (UID: "431e05a1-e497-43f1-94b5-af113f71f052"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4787]: I0126 19:00:03.714098 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431e05a1-e497-43f1-94b5-af113f71f052-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4787]: I0126 19:00:03.718742 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431e05a1-e497-43f1-94b5-af113f71f052-kube-api-access-8fjrk" (OuterVolumeSpecName: "kube-api-access-8fjrk") pod "431e05a1-e497-43f1-94b5-af113f71f052" (UID: "431e05a1-e497-43f1-94b5-af113f71f052"). InnerVolumeSpecName "kube-api-access-8fjrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4787]: I0126 19:00:03.721917 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431e05a1-e497-43f1-94b5-af113f71f052-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "431e05a1-e497-43f1-94b5-af113f71f052" (UID: "431e05a1-e497-43f1-94b5-af113f71f052"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4787]: I0126 19:00:03.814884 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fjrk\" (UniqueName: \"kubernetes.io/projected/431e05a1-e497-43f1-94b5-af113f71f052-kube-api-access-8fjrk\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4787]: I0126 19:00:03.815244 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431e05a1-e497-43f1-94b5-af113f71f052-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:04 crc kubenswrapper[4787]: I0126 19:00:04.222602 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" event={"ID":"431e05a1-e497-43f1-94b5-af113f71f052","Type":"ContainerDied","Data":"5ec92f583fb88f4cf35cb77047604b864f6b399041fe114f135e66dd2652f9cf"} Jan 26 19:00:04 crc kubenswrapper[4787]: I0126 19:00:04.222687 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm" Jan 26 19:00:04 crc kubenswrapper[4787]: I0126 19:00:04.222696 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ec92f583fb88f4cf35cb77047604b864f6b399041fe114f135e66dd2652f9cf" Jan 26 19:00:04 crc kubenswrapper[4787]: I0126 19:00:04.597774 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6"] Jan 26 19:00:04 crc kubenswrapper[4787]: I0126 19:00:04.602726 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490855-4g7c6"] Jan 26 19:00:05 crc kubenswrapper[4787]: I0126 19:00:05.599876 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14002b44-d9b6-447e-9c7d-2cad3f54515a" path="/var/lib/kubelet/pods/14002b44-d9b6-447e-9c7d-2cad3f54515a/volumes" Jan 26 19:00:15 crc kubenswrapper[4787]: I0126 19:00:15.589672 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:00:15 crc kubenswrapper[4787]: E0126 19:00:15.591247 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:00:29 crc kubenswrapper[4787]: I0126 19:00:29.589826 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:00:29 crc kubenswrapper[4787]: E0126 19:00:29.591087 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:00:41 crc kubenswrapper[4787]: I0126 19:00:41.593204 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:00:41 crc kubenswrapper[4787]: E0126 19:00:41.593729 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:00:54 crc kubenswrapper[4787]: I0126 19:00:54.589466 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:00:54 crc kubenswrapper[4787]: E0126 19:00:54.590283 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:00:57 crc kubenswrapper[4787]: I0126 19:00:57.951840 4787 scope.go:117] "RemoveContainer" containerID="deb1fbf1696f2a237601cf07391e71712b8e3a0377e51a79cc31b4d42f3214ee" Jan 26 19:01:07 crc kubenswrapper[4787]: I0126 19:01:07.589076 4787 scope.go:117] "RemoveContainer" 
containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:01:07 crc kubenswrapper[4787]: E0126 19:01:07.589632 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:01:21 crc kubenswrapper[4787]: I0126 19:01:21.602028 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:01:21 crc kubenswrapper[4787]: E0126 19:01:21.602752 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:01:36 crc kubenswrapper[4787]: I0126 19:01:36.589728 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:01:36 crc kubenswrapper[4787]: E0126 19:01:36.590854 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:01:49 crc kubenswrapper[4787]: I0126 19:01:49.589464 4787 scope.go:117] 
"RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:01:49 crc kubenswrapper[4787]: E0126 19:01:49.590218 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:02:01 crc kubenswrapper[4787]: I0126 19:02:01.597403 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:02:01 crc kubenswrapper[4787]: E0126 19:02:01.598505 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:02:16 crc kubenswrapper[4787]: I0126 19:02:16.589402 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:02:16 crc kubenswrapper[4787]: E0126 19:02:16.590236 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:02:27 crc kubenswrapper[4787]: I0126 19:02:27.589268 
4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:02:27 crc kubenswrapper[4787]: E0126 19:02:27.590308 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:02:38 crc kubenswrapper[4787]: I0126 19:02:38.589445 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:02:38 crc kubenswrapper[4787]: E0126 19:02:38.590238 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.722871 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hx6bq"] Jan 26 19:02:47 crc kubenswrapper[4787]: E0126 19:02:47.723704 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431e05a1-e497-43f1-94b5-af113f71f052" containerName="collect-profiles" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.723720 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="431e05a1-e497-43f1-94b5-af113f71f052" containerName="collect-profiles" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.723879 4787 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="431e05a1-e497-43f1-94b5-af113f71f052" containerName="collect-profiles" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.727857 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.729927 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.730177 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7srp8" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.730523 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.730658 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.730677 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.741768 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hx6bq"] Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.809899 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-dns-svc\") pod \"dnsmasq-dns-95587bc99-hx6bq\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.810132 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-config\") pod \"dnsmasq-dns-95587bc99-hx6bq\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " 
pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.810265 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl7nm\" (UniqueName: \"kubernetes.io/projected/0b0a02e9-812e-475f-ae69-c4a8fce2098f-kube-api-access-gl7nm\") pod \"dnsmasq-dns-95587bc99-hx6bq\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.911986 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-dns-svc\") pod \"dnsmasq-dns-95587bc99-hx6bq\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.912035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-config\") pod \"dnsmasq-dns-95587bc99-hx6bq\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.912223 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl7nm\" (UniqueName: \"kubernetes.io/projected/0b0a02e9-812e-475f-ae69-c4a8fce2098f-kube-api-access-gl7nm\") pod \"dnsmasq-dns-95587bc99-hx6bq\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.912941 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-tjtn6"] Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.913106 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-dns-svc\") pod \"dnsmasq-dns-95587bc99-hx6bq\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.912952 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-config\") pod \"dnsmasq-dns-95587bc99-hx6bq\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.914560 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.929772 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-tjtn6"] Jan 26 19:02:47 crc kubenswrapper[4787]: I0126 19:02:47.936344 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl7nm\" (UniqueName: \"kubernetes.io/projected/0b0a02e9-812e-475f-ae69-c4a8fce2098f-kube-api-access-gl7nm\") pod \"dnsmasq-dns-95587bc99-hx6bq\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.013818 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj54g\" (UniqueName: \"kubernetes.io/projected/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-kube-api-access-sj54g\") pod \"dnsmasq-dns-5d79f765b5-tjtn6\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.013869 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-dns-svc\") pod 
\"dnsmasq-dns-5d79f765b5-tjtn6\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.013928 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-config\") pod \"dnsmasq-dns-5d79f765b5-tjtn6\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.050934 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.115875 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj54g\" (UniqueName: \"kubernetes.io/projected/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-kube-api-access-sj54g\") pod \"dnsmasq-dns-5d79f765b5-tjtn6\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.116217 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-tjtn6\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.116309 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-config\") pod \"dnsmasq-dns-5d79f765b5-tjtn6\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.117216 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-config\") pod \"dnsmasq-dns-5d79f765b5-tjtn6\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.121821 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-tjtn6\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.136829 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj54g\" (UniqueName: \"kubernetes.io/projected/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-kube-api-access-sj54g\") pod \"dnsmasq-dns-5d79f765b5-tjtn6\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.230879 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.293763 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hx6bq"] Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.473147 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" event={"ID":"0b0a02e9-812e-475f-ae69-c4a8fce2098f","Type":"ContainerStarted","Data":"81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70"} Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.473188 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" event={"ID":"0b0a02e9-812e-475f-ae69-c4a8fce2098f","Type":"ContainerStarted","Data":"53c2db56d9337c0018d15807badeda26e43822cd0fa355fc8ceae706b47fbd46"} Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.732910 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-tjtn6"] Jan 26 19:02:48 crc kubenswrapper[4787]: W0126 19:02:48.738129 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8fcea1c_e71b_45b4_8cd7_2b9c1d898e9f.slice/crio-1d066a17c787ad61ec45d142843d4001ee201d4c9f7b19ebdecd58bc2e4d613e WatchSource:0}: Error finding container 1d066a17c787ad61ec45d142843d4001ee201d4c9f7b19ebdecd58bc2e4d613e: Status 404 returned error can't find the container with id 1d066a17c787ad61ec45d142843d4001ee201d4c9f7b19ebdecd58bc2e4d613e Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.813129 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.814834 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.817197 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.818782 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mtjmw" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.819395 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.819640 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.819806 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.821708 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.928701 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.928747 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.928770 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.928844 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6nc\" (UniqueName: \"kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-kube-api-access-ps6nc\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.928891 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2cda9161-6599-4f76-b5b6-d3644a04059a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.928908 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2cda9161-6599-4f76-b5b6-d3644a04059a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.928934 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.929057 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:48 crc kubenswrapper[4787]: I0126 19:02:48.929168 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-85063044-b552-4be6-ae11-9a5761b3ce93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.030182 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.030228 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.030259 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-85063044-b552-4be6-ae11-9a5761b3ce93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.030316 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.030334 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.030348 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.030377 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6nc\" (UniqueName: \"kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-kube-api-access-ps6nc\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.030405 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2cda9161-6599-4f76-b5b6-d3644a04059a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.030420 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2cda9161-6599-4f76-b5b6-d3644a04059a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " 
pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.030617 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.031265 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.031753 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.032280 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.033639 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.033669 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-85063044-b552-4be6-ae11-9a5761b3ce93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3375fc6b96b1abae189094e7496b8ac2aba4ff1a6fae2a30145423eddf7041ea/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.034853 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2cda9161-6599-4f76-b5b6-d3644a04059a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.035410 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2cda9161-6599-4f76-b5b6-d3644a04059a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.035497 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.056505 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6nc\" (UniqueName: \"kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-kube-api-access-ps6nc\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " 
pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.065839 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.065845 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-85063044-b552-4be6-ae11-9a5761b3ce93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93\") pod \"rabbitmq-server-0\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.067247 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.069174 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.069184 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.069174 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.069858 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.089806 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.090120 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fjsvb" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.131566 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.131628 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.131649 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.131679 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-87012396-6ac7-490e-8887-418118cdbb8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.131699 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.131720 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.131746 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.131769 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxn4\" (UniqueName: \"kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-kube-api-access-cjxn4\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.131799 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.165297 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.235276 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.235347 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.235370 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.235390 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-87012396-6ac7-490e-8887-418118cdbb8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.235413 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 
19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.235445 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.235473 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.235496 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjxn4\" (UniqueName: \"kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-kube-api-access-cjxn4\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.235525 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.235954 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.236780 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.237488 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.238619 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.239161 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.239182 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-87012396-6ac7-490e-8887-418118cdbb8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e3d241a176833a9d493a38ae90635dd34e4a08e60336a801336e4b66d8414864/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.243044 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.246018 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.250842 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.264982 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjxn4\" (UniqueName: \"kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-kube-api-access-cjxn4\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.293181 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-87012396-6ac7-490e-8887-418118cdbb8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f\") pod \"rabbitmq-cell1-server-0\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.427969 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.490568 4787 generic.go:334] "Generic (PLEG): container finished" podID="a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" containerID="29b9557233a9858c3277d068a8a207a87fc4ec4f187bf8c629995c4a8f24d679" exitCode=0 Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.490641 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" event={"ID":"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f","Type":"ContainerDied","Data":"29b9557233a9858c3277d068a8a207a87fc4ec4f187bf8c629995c4a8f24d679"} Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.490678 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" event={"ID":"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f","Type":"ContainerStarted","Data":"1d066a17c787ad61ec45d142843d4001ee201d4c9f7b19ebdecd58bc2e4d613e"} Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.494916 4787 generic.go:334] "Generic (PLEG): container finished" podID="0b0a02e9-812e-475f-ae69-c4a8fce2098f" containerID="81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70" exitCode=0 Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.495013 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-95587bc99-hx6bq" event={"ID":"0b0a02e9-812e-475f-ae69-c4a8fce2098f","Type":"ContainerDied","Data":"81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70"} Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.604623 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:02:49 crc kubenswrapper[4787]: I0126 19:02:49.891638 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:02:49 crc kubenswrapper[4787]: W0126 19:02:49.895496 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92fa88a3_7fe7_434f_a74e_9f9c017f5b88.slice/crio-e2d21a5b7a6dd9a8a4fc4251fa9fed758b4e8f61b41a6b2e628da215e4452993 WatchSource:0}: Error finding container e2d21a5b7a6dd9a8a4fc4251fa9fed758b4e8f61b41a6b2e628da215e4452993: Status 404 returned error can't find the container with id e2d21a5b7a6dd9a8a4fc4251fa9fed758b4e8f61b41a6b2e628da215e4452993 Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.244546 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.246634 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.249239 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.249379 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hhq6b" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.249451 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.251447 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.276702 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.281013 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.352039 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f654521f-66fa-4cb5-b058-bfdd66311d5c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.352099 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f654521f-66fa-4cb5-b058-bfdd66311d5c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.352129 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-ntqvd\" (UniqueName: \"kubernetes.io/projected/f654521f-66fa-4cb5-b058-bfdd66311d5c-kube-api-access-ntqvd\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.352155 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f654521f-66fa-4cb5-b058-bfdd66311d5c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.352223 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f654521f-66fa-4cb5-b058-bfdd66311d5c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.352250 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e7d1ec20-86fc-49c4-9066-ded337896c53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7d1ec20-86fc-49c4-9066-ded337896c53\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.352309 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f654521f-66fa-4cb5-b058-bfdd66311d5c-kolla-config\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.352334 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f654521f-66fa-4cb5-b058-bfdd66311d5c-config-data-default\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.454129 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f654521f-66fa-4cb5-b058-bfdd66311d5c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.454186 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e7d1ec20-86fc-49c4-9066-ded337896c53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7d1ec20-86fc-49c4-9066-ded337896c53\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.454258 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f654521f-66fa-4cb5-b058-bfdd66311d5c-kolla-config\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.454285 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f654521f-66fa-4cb5-b058-bfdd66311d5c-config-data-default\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.454310 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f654521f-66fa-4cb5-b058-bfdd66311d5c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.454347 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f654521f-66fa-4cb5-b058-bfdd66311d5c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.454369 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntqvd\" (UniqueName: \"kubernetes.io/projected/f654521f-66fa-4cb5-b058-bfdd66311d5c-kube-api-access-ntqvd\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.454400 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f654521f-66fa-4cb5-b058-bfdd66311d5c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.454612 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f654521f-66fa-4cb5-b058-bfdd66311d5c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.455571 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f654521f-66fa-4cb5-b058-bfdd66311d5c-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.455769 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f654521f-66fa-4cb5-b058-bfdd66311d5c-kolla-config\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.457261 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f654521f-66fa-4cb5-b058-bfdd66311d5c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.459691 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f654521f-66fa-4cb5-b058-bfdd66311d5c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.462836 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f654521f-66fa-4cb5-b058-bfdd66311d5c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.464089 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.464124 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e7d1ec20-86fc-49c4-9066-ded337896c53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7d1ec20-86fc-49c4-9066-ded337896c53\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ca83fab9e82994bf1f0500df6d32362b7ec6e4a66102384b82640ee05353b6af/globalmount\"" pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.473699 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntqvd\" (UniqueName: \"kubernetes.io/projected/f654521f-66fa-4cb5-b058-bfdd66311d5c-kube-api-access-ntqvd\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.503364 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2cda9161-6599-4f76-b5b6-d3644a04059a","Type":"ContainerStarted","Data":"6abe01ebeedcd738744caee8ba8bf1ee25a6d868dce9b4b0ca68b4af557a02ab"} Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.506893 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" event={"ID":"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f","Type":"ContainerStarted","Data":"75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8"} Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.506998 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.508106 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"92fa88a3-7fe7-434f-a74e-9f9c017f5b88","Type":"ContainerStarted","Data":"e2d21a5b7a6dd9a8a4fc4251fa9fed758b4e8f61b41a6b2e628da215e4452993"} Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.509931 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" event={"ID":"0b0a02e9-812e-475f-ae69-c4a8fce2098f","Type":"ContainerStarted","Data":"928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016"} Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.510183 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.523253 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" podStartSLOduration=3.523231687 podStartE2EDuration="3.523231687s" podCreationTimestamp="2026-01-26 19:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:50.523032962 +0000 UTC m=+4739.230169095" watchObservedRunningTime="2026-01-26 19:02:50.523231687 +0000 UTC m=+4739.230367820" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.550877 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" podStartSLOduration=3.5508580260000002 podStartE2EDuration="3.550858026s" podCreationTimestamp="2026-01-26 19:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:50.547866853 +0000 UTC m=+4739.255002986" watchObservedRunningTime="2026-01-26 19:02:50.550858026 +0000 UTC m=+4739.257994159" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.605786 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.606668 4787 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.609355 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-n6htz" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.610062 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.617938 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.657585 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76c766a7-ec1b-4399-a988-70fa15711c4d-config-data\") pod \"memcached-0\" (UID: \"76c766a7-ec1b-4399-a988-70fa15711c4d\") " pod="openstack/memcached-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.657661 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrtf\" (UniqueName: \"kubernetes.io/projected/76c766a7-ec1b-4399-a988-70fa15711c4d-kube-api-access-nfrtf\") pod \"memcached-0\" (UID: \"76c766a7-ec1b-4399-a988-70fa15711c4d\") " pod="openstack/memcached-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.657729 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/76c766a7-ec1b-4399-a988-70fa15711c4d-kolla-config\") pod \"memcached-0\" (UID: \"76c766a7-ec1b-4399-a988-70fa15711c4d\") " pod="openstack/memcached-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.696951 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e7d1ec20-86fc-49c4-9066-ded337896c53\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7d1ec20-86fc-49c4-9066-ded337896c53\") pod \"openstack-galera-0\" (UID: \"f654521f-66fa-4cb5-b058-bfdd66311d5c\") " pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.759576 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfrtf\" (UniqueName: \"kubernetes.io/projected/76c766a7-ec1b-4399-a988-70fa15711c4d-kube-api-access-nfrtf\") pod \"memcached-0\" (UID: \"76c766a7-ec1b-4399-a988-70fa15711c4d\") " pod="openstack/memcached-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.759634 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/76c766a7-ec1b-4399-a988-70fa15711c4d-kolla-config\") pod \"memcached-0\" (UID: \"76c766a7-ec1b-4399-a988-70fa15711c4d\") " pod="openstack/memcached-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.759726 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76c766a7-ec1b-4399-a988-70fa15711c4d-config-data\") pod \"memcached-0\" (UID: \"76c766a7-ec1b-4399-a988-70fa15711c4d\") " pod="openstack/memcached-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.760679 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76c766a7-ec1b-4399-a988-70fa15711c4d-config-data\") pod \"memcached-0\" (UID: \"76c766a7-ec1b-4399-a988-70fa15711c4d\") " pod="openstack/memcached-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.760705 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/76c766a7-ec1b-4399-a988-70fa15711c4d-kolla-config\") pod \"memcached-0\" (UID: \"76c766a7-ec1b-4399-a988-70fa15711c4d\") " pod="openstack/memcached-0" Jan 26 19:02:50 crc 
kubenswrapper[4787]: I0126 19:02:50.776900 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfrtf\" (UniqueName: \"kubernetes.io/projected/76c766a7-ec1b-4399-a988-70fa15711c4d-kube-api-access-nfrtf\") pod \"memcached-0\" (UID: \"76c766a7-ec1b-4399-a988-70fa15711c4d\") " pod="openstack/memcached-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.869750 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 19:02:50 crc kubenswrapper[4787]: I0126 19:02:50.996828 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.338480 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 19:02:51 crc kubenswrapper[4787]: W0126 19:02:51.346695 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf654521f_66fa_4cb5_b058_bfdd66311d5c.slice/crio-600f199897515ee031d97eea369cc7b6eba78b6f6b8c7f74dedbc608a17a60f2 WatchSource:0}: Error finding container 600f199897515ee031d97eea369cc7b6eba78b6f6b8c7f74dedbc608a17a60f2: Status 404 returned error can't find the container with id 600f199897515ee031d97eea369cc7b6eba78b6f6b8c7f74dedbc608a17a60f2 Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.455876 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.522763 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"76c766a7-ec1b-4399-a988-70fa15711c4d","Type":"ContainerStarted","Data":"f31d52e8a0fc220eafd1c63215c428d793d0e095e6c5848a5cdb74f8a65194ea"} Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.526604 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"2cda9161-6599-4f76-b5b6-d3644a04059a","Type":"ContainerStarted","Data":"bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5"} Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.529079 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f654521f-66fa-4cb5-b058-bfdd66311d5c","Type":"ContainerStarted","Data":"18fed97cd15d2d7799be1c44a7be2d8a91b1869a7b5ab7bdcebc7be0c69dc5d0"} Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.529148 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f654521f-66fa-4cb5-b058-bfdd66311d5c","Type":"ContainerStarted","Data":"600f199897515ee031d97eea369cc7b6eba78b6f6b8c7f74dedbc608a17a60f2"} Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.530651 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92fa88a3-7fe7-434f-a74e-9f9c017f5b88","Type":"ContainerStarted","Data":"443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2"} Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.832070 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.833514 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.835603 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.837343 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.842788 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.845534 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.849267 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hk76w" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.876918 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae41cad-1186-47da-b18b-35613fd332c2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.877028 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73fe9834-7d68-4b1b-94eb-d06573c04a61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73fe9834-7d68-4b1b-94eb-d06573c04a61\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.877076 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/8ae41cad-1186-47da-b18b-35613fd332c2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.877098 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8ae41cad-1186-47da-b18b-35613fd332c2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.877147 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae41cad-1186-47da-b18b-35613fd332c2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.877165 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ae41cad-1186-47da-b18b-35613fd332c2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.877204 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xppgx\" (UniqueName: \"kubernetes.io/projected/8ae41cad-1186-47da-b18b-35613fd332c2-kube-api-access-xppgx\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.877224 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ae41cad-1186-47da-b18b-35613fd332c2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.978260 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8ae41cad-1186-47da-b18b-35613fd332c2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.978318 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae41cad-1186-47da-b18b-35613fd332c2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.978342 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ae41cad-1186-47da-b18b-35613fd332c2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.978371 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xppgx\" (UniqueName: \"kubernetes.io/projected/8ae41cad-1186-47da-b18b-35613fd332c2-kube-api-access-xppgx\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.978392 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/8ae41cad-1186-47da-b18b-35613fd332c2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.978413 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae41cad-1186-47da-b18b-35613fd332c2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.978468 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73fe9834-7d68-4b1b-94eb-d06573c04a61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73fe9834-7d68-4b1b-94eb-d06573c04a61\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.978513 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8ae41cad-1186-47da-b18b-35613fd332c2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.979307 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ae41cad-1186-47da-b18b-35613fd332c2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.979382 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/8ae41cad-1186-47da-b18b-35613fd332c2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.979450 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8ae41cad-1186-47da-b18b-35613fd332c2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.980790 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ae41cad-1186-47da-b18b-35613fd332c2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.981604 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.981653 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73fe9834-7d68-4b1b-94eb-d06573c04a61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73fe9834-7d68-4b1b-94eb-d06573c04a61\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f642829be2f1bbb54a3e326304a08e6cfa693079a91375095b9f565277ce6d1f/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.982821 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae41cad-1186-47da-b18b-35613fd332c2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.983613 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae41cad-1186-47da-b18b-35613fd332c2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:51 crc kubenswrapper[4787]: I0126 19:02:51.994725 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xppgx\" (UniqueName: \"kubernetes.io/projected/8ae41cad-1186-47da-b18b-35613fd332c2-kube-api-access-xppgx\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:52 crc kubenswrapper[4787]: I0126 19:02:52.009999 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73fe9834-7d68-4b1b-94eb-d06573c04a61\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73fe9834-7d68-4b1b-94eb-d06573c04a61\") pod \"openstack-cell1-galera-0\" (UID: \"8ae41cad-1186-47da-b18b-35613fd332c2\") " pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:52 crc kubenswrapper[4787]: I0126 19:02:52.151746 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 19:02:52 crc kubenswrapper[4787]: I0126 19:02:52.539189 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"76c766a7-ec1b-4399-a988-70fa15711c4d","Type":"ContainerStarted","Data":"1abcca150a53e85fefa0e95e84bf3b1bbaec7411e8e80958f5f19174f2b31975"} Jan 26 19:02:52 crc kubenswrapper[4787]: I0126 19:02:52.540186 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 19:02:52 crc kubenswrapper[4787]: I0126 19:02:52.557148 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.557125027 podStartE2EDuration="2.557125027s" podCreationTimestamp="2026-01-26 19:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:52.555535358 +0000 UTC m=+4741.262671511" watchObservedRunningTime="2026-01-26 19:02:52.557125027 +0000 UTC m=+4741.264261170" Jan 26 19:02:52 crc kubenswrapper[4787]: I0126 19:02:52.607382 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 19:02:52 crc kubenswrapper[4787]: W0126 19:02:52.613071 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ae41cad_1186_47da_b18b_35613fd332c2.slice/crio-38b615b23723ad96b54bd5bf74686d70da3b026895c7a8335561d1e6f14832f7 WatchSource:0}: Error finding container 38b615b23723ad96b54bd5bf74686d70da3b026895c7a8335561d1e6f14832f7: Status 404 
returned error can't find the container with id 38b615b23723ad96b54bd5bf74686d70da3b026895c7a8335561d1e6f14832f7 Jan 26 19:02:53 crc kubenswrapper[4787]: I0126 19:02:53.551355 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8ae41cad-1186-47da-b18b-35613fd332c2","Type":"ContainerStarted","Data":"b2f2c040e35c618f4b3d4660232e90673758626f8259a47e006e2f18275350ea"} Jan 26 19:02:53 crc kubenswrapper[4787]: I0126 19:02:53.551807 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8ae41cad-1186-47da-b18b-35613fd332c2","Type":"ContainerStarted","Data":"38b615b23723ad96b54bd5bf74686d70da3b026895c7a8335561d1e6f14832f7"} Jan 26 19:02:53 crc kubenswrapper[4787]: I0126 19:02:53.593775 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:02:53 crc kubenswrapper[4787]: E0126 19:02:53.594088 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:02:55 crc kubenswrapper[4787]: I0126 19:02:55.572144 4787 generic.go:334] "Generic (PLEG): container finished" podID="f654521f-66fa-4cb5-b058-bfdd66311d5c" containerID="18fed97cd15d2d7799be1c44a7be2d8a91b1869a7b5ab7bdcebc7be0c69dc5d0" exitCode=0 Jan 26 19:02:55 crc kubenswrapper[4787]: I0126 19:02:55.572238 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f654521f-66fa-4cb5-b058-bfdd66311d5c","Type":"ContainerDied","Data":"18fed97cd15d2d7799be1c44a7be2d8a91b1869a7b5ab7bdcebc7be0c69dc5d0"} Jan 26 19:02:56 crc 
kubenswrapper[4787]: I0126 19:02:56.589505 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f654521f-66fa-4cb5-b058-bfdd66311d5c","Type":"ContainerStarted","Data":"798fe6a5b3e5f27951322ff8c485685de276773a89f574025c47b599fb926c16"} Jan 26 19:02:56 crc kubenswrapper[4787]: I0126 19:02:56.592206 4787 generic.go:334] "Generic (PLEG): container finished" podID="8ae41cad-1186-47da-b18b-35613fd332c2" containerID="b2f2c040e35c618f4b3d4660232e90673758626f8259a47e006e2f18275350ea" exitCode=0 Jan 26 19:02:56 crc kubenswrapper[4787]: I0126 19:02:56.592268 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8ae41cad-1186-47da-b18b-35613fd332c2","Type":"ContainerDied","Data":"b2f2c040e35c618f4b3d4660232e90673758626f8259a47e006e2f18275350ea"} Jan 26 19:02:56 crc kubenswrapper[4787]: I0126 19:02:56.625785 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.625752258 podStartE2EDuration="7.625752258s" podCreationTimestamp="2026-01-26 19:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:56.619077814 +0000 UTC m=+4745.326213957" watchObservedRunningTime="2026-01-26 19:02:56.625752258 +0000 UTC m=+4745.332888431" Jan 26 19:02:57 crc kubenswrapper[4787]: I0126 19:02:57.603846 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8ae41cad-1186-47da-b18b-35613fd332c2","Type":"ContainerStarted","Data":"64477371bd52a73005c1fa1abac0a6e1a28ee22dc0f638426c8cee907cd507f2"} Jan 26 19:02:57 crc kubenswrapper[4787]: I0126 19:02:57.633762 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.6337205059999995 podStartE2EDuration="7.633720506s" 
podCreationTimestamp="2026-01-26 19:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:57.63343422 +0000 UTC m=+4746.340570343" watchObservedRunningTime="2026-01-26 19:02:57.633720506 +0000 UTC m=+4746.340856639" Jan 26 19:02:58 crc kubenswrapper[4787]: I0126 19:02:58.052347 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:58 crc kubenswrapper[4787]: I0126 19:02:58.232109 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:02:58 crc kubenswrapper[4787]: I0126 19:02:58.289100 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hx6bq"] Jan 26 19:02:58 crc kubenswrapper[4787]: I0126 19:02:58.609242 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" podUID="0b0a02e9-812e-475f-ae69-c4a8fce2098f" containerName="dnsmasq-dns" containerID="cri-o://928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016" gracePeriod=10 Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.512152 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.604602 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl7nm\" (UniqueName: \"kubernetes.io/projected/0b0a02e9-812e-475f-ae69-c4a8fce2098f-kube-api-access-gl7nm\") pod \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.604692 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-dns-svc\") pod \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.604734 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-config\") pod \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\" (UID: \"0b0a02e9-812e-475f-ae69-c4a8fce2098f\") " Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.618401 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0a02e9-812e-475f-ae69-c4a8fce2098f-kube-api-access-gl7nm" (OuterVolumeSpecName: "kube-api-access-gl7nm") pod "0b0a02e9-812e-475f-ae69-c4a8fce2098f" (UID: "0b0a02e9-812e-475f-ae69-c4a8fce2098f"). InnerVolumeSpecName "kube-api-access-gl7nm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.626884 4787 generic.go:334] "Generic (PLEG): container finished" podID="0b0a02e9-812e-475f-ae69-c4a8fce2098f" containerID="928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016" exitCode=0 Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.626921 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" event={"ID":"0b0a02e9-812e-475f-ae69-c4a8fce2098f","Type":"ContainerDied","Data":"928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016"} Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.626961 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" event={"ID":"0b0a02e9-812e-475f-ae69-c4a8fce2098f","Type":"ContainerDied","Data":"53c2db56d9337c0018d15807badeda26e43822cd0fa355fc8ceae706b47fbd46"} Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.626977 4787 scope.go:117] "RemoveContainer" containerID="928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.627070 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-hx6bq" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.649049 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-config" (OuterVolumeSpecName: "config") pod "0b0a02e9-812e-475f-ae69-c4a8fce2098f" (UID: "0b0a02e9-812e-475f-ae69-c4a8fce2098f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.659497 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0b0a02e9-812e-475f-ae69-c4a8fce2098f" (UID: "0b0a02e9-812e-475f-ae69-c4a8fce2098f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.693232 4787 scope.go:117] "RemoveContainer" containerID="81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.706857 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl7nm\" (UniqueName: \"kubernetes.io/projected/0b0a02e9-812e-475f-ae69-c4a8fce2098f-kube-api-access-gl7nm\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.706886 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.706897 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0a02e9-812e-475f-ae69-c4a8fce2098f-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.713028 4787 scope.go:117] "RemoveContainer" containerID="928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016" Jan 26 19:02:59 crc kubenswrapper[4787]: E0126 19:02:59.713610 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016\": container with ID starting with 928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016 not found: ID does 
not exist" containerID="928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.713640 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016"} err="failed to get container status \"928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016\": rpc error: code = NotFound desc = could not find container \"928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016\": container with ID starting with 928e618ad9117fa6c8bed3967100b54ddfa77b6292840d0cbb5d14a78c1c0016 not found: ID does not exist" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.713659 4787 scope.go:117] "RemoveContainer" containerID="81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70" Jan 26 19:02:59 crc kubenswrapper[4787]: E0126 19:02:59.713939 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70\": container with ID starting with 81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70 not found: ID does not exist" containerID="81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.714007 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70"} err="failed to get container status \"81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70\": rpc error: code = NotFound desc = could not find container \"81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70\": container with ID starting with 81eae5413669677f151370d67e61a73dc64ca994f96ff57ab6e0fcc9aad5cf70 not found: ID does not exist" Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.956521 4787 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hx6bq"] Jan 26 19:02:59 crc kubenswrapper[4787]: I0126 19:02:59.962597 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hx6bq"] Jan 26 19:03:00 crc kubenswrapper[4787]: I0126 19:03:00.871019 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 19:03:00 crc kubenswrapper[4787]: I0126 19:03:00.871085 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 19:03:00 crc kubenswrapper[4787]: I0126 19:03:00.989585 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 26 19:03:00 crc kubenswrapper[4787]: I0126 19:03:00.998706 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 19:03:01 crc kubenswrapper[4787]: I0126 19:03:01.598245 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b0a02e9-812e-475f-ae69-c4a8fce2098f" path="/var/lib/kubelet/pods/0b0a02e9-812e-475f-ae69-c4a8fce2098f/volumes" Jan 26 19:03:01 crc kubenswrapper[4787]: I0126 19:03:01.738943 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 19:03:02 crc kubenswrapper[4787]: I0126 19:03:02.152608 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 19:03:02 crc kubenswrapper[4787]: I0126 19:03:02.152665 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 19:03:04 crc kubenswrapper[4787]: I0126 19:03:04.889347 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 19:03:04 crc kubenswrapper[4787]: I0126 19:03:04.981012 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 19:03:05 crc kubenswrapper[4787]: I0126 19:03:05.593112 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:03:05 crc kubenswrapper[4787]: E0126 19:03:05.593390 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.210765 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mzfhs"] Jan 26 19:03:09 crc kubenswrapper[4787]: E0126 19:03:09.211631 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0a02e9-812e-475f-ae69-c4a8fce2098f" containerName="dnsmasq-dns" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.211659 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0a02e9-812e-475f-ae69-c4a8fce2098f" containerName="dnsmasq-dns" Jan 26 19:03:09 crc kubenswrapper[4787]: E0126 19:03:09.211693 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0a02e9-812e-475f-ae69-c4a8fce2098f" containerName="init" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.211707 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0a02e9-812e-475f-ae69-c4a8fce2098f" containerName="init" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.212073 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0a02e9-812e-475f-ae69-c4a8fce2098f" containerName="dnsmasq-dns" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.213194 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mzfhs" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.216079 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.219342 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mzfhs"] Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.272514 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffbd1b0f-6a56-478b-af3b-475e978a4489-operator-scripts\") pod \"root-account-create-update-mzfhs\" (UID: \"ffbd1b0f-6a56-478b-af3b-475e978a4489\") " pod="openstack/root-account-create-update-mzfhs" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.272597 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljwf\" (UniqueName: \"kubernetes.io/projected/ffbd1b0f-6a56-478b-af3b-475e978a4489-kube-api-access-dljwf\") pod \"root-account-create-update-mzfhs\" (UID: \"ffbd1b0f-6a56-478b-af3b-475e978a4489\") " pod="openstack/root-account-create-update-mzfhs" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.373502 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dljwf\" (UniqueName: \"kubernetes.io/projected/ffbd1b0f-6a56-478b-af3b-475e978a4489-kube-api-access-dljwf\") pod \"root-account-create-update-mzfhs\" (UID: \"ffbd1b0f-6a56-478b-af3b-475e978a4489\") " pod="openstack/root-account-create-update-mzfhs" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.373613 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffbd1b0f-6a56-478b-af3b-475e978a4489-operator-scripts\") pod \"root-account-create-update-mzfhs\" (UID: 
\"ffbd1b0f-6a56-478b-af3b-475e978a4489\") " pod="openstack/root-account-create-update-mzfhs" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.374372 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffbd1b0f-6a56-478b-af3b-475e978a4489-operator-scripts\") pod \"root-account-create-update-mzfhs\" (UID: \"ffbd1b0f-6a56-478b-af3b-475e978a4489\") " pod="openstack/root-account-create-update-mzfhs" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.394601 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dljwf\" (UniqueName: \"kubernetes.io/projected/ffbd1b0f-6a56-478b-af3b-475e978a4489-kube-api-access-dljwf\") pod \"root-account-create-update-mzfhs\" (UID: \"ffbd1b0f-6a56-478b-af3b-475e978a4489\") " pod="openstack/root-account-create-update-mzfhs" Jan 26 19:03:09 crc kubenswrapper[4787]: I0126 19:03:09.545089 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mzfhs" Jan 26 19:03:10 crc kubenswrapper[4787]: I0126 19:03:10.070190 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mzfhs"] Jan 26 19:03:10 crc kubenswrapper[4787]: W0126 19:03:10.073909 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffbd1b0f_6a56_478b_af3b_475e978a4489.slice/crio-7434246832dc2c409b97c519ab5c50799af2461ad38ecd23e3cff9333a8e50d0 WatchSource:0}: Error finding container 7434246832dc2c409b97c519ab5c50799af2461ad38ecd23e3cff9333a8e50d0: Status 404 returned error can't find the container with id 7434246832dc2c409b97c519ab5c50799af2461ad38ecd23e3cff9333a8e50d0 Jan 26 19:03:10 crc kubenswrapper[4787]: I0126 19:03:10.726229 4787 generic.go:334] "Generic (PLEG): container finished" podID="ffbd1b0f-6a56-478b-af3b-475e978a4489" containerID="f8a53801702323235138bdba325eed37fb51247a43bc771e70d9e48be5c89e1d" exitCode=0 Jan 26 19:03:10 crc kubenswrapper[4787]: I0126 19:03:10.726292 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzfhs" event={"ID":"ffbd1b0f-6a56-478b-af3b-475e978a4489","Type":"ContainerDied","Data":"f8a53801702323235138bdba325eed37fb51247a43bc771e70d9e48be5c89e1d"} Jan 26 19:03:10 crc kubenswrapper[4787]: I0126 19:03:10.726340 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzfhs" event={"ID":"ffbd1b0f-6a56-478b-af3b-475e978a4489","Type":"ContainerStarted","Data":"7434246832dc2c409b97c519ab5c50799af2461ad38ecd23e3cff9333a8e50d0"} Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.078435 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mzfhs" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.117674 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-brh2s"] Jan 26 19:03:12 crc kubenswrapper[4787]: E0126 19:03:12.118083 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffbd1b0f-6a56-478b-af3b-475e978a4489" containerName="mariadb-account-create-update" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.118106 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffbd1b0f-6a56-478b-af3b-475e978a4489" containerName="mariadb-account-create-update" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.118289 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffbd1b0f-6a56-478b-af3b-475e978a4489" containerName="mariadb-account-create-update" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.120596 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.143990 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brh2s"] Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.219832 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dljwf\" (UniqueName: \"kubernetes.io/projected/ffbd1b0f-6a56-478b-af3b-475e978a4489-kube-api-access-dljwf\") pod \"ffbd1b0f-6a56-478b-af3b-475e978a4489\" (UID: \"ffbd1b0f-6a56-478b-af3b-475e978a4489\") " Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.219901 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffbd1b0f-6a56-478b-af3b-475e978a4489-operator-scripts\") pod \"ffbd1b0f-6a56-478b-af3b-475e978a4489\" (UID: \"ffbd1b0f-6a56-478b-af3b-475e978a4489\") " Jan 26 19:03:12 crc 
kubenswrapper[4787]: I0126 19:03:12.220170 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-utilities\") pod \"community-operators-brh2s\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.220257 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-catalog-content\") pod \"community-operators-brh2s\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.220373 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5hzg\" (UniqueName: \"kubernetes.io/projected/9990de46-d41c-44ba-a23b-2f8e6e8dff13-kube-api-access-f5hzg\") pod \"community-operators-brh2s\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.220681 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffbd1b0f-6a56-478b-af3b-475e978a4489-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffbd1b0f-6a56-478b-af3b-475e978a4489" (UID: "ffbd1b0f-6a56-478b-af3b-475e978a4489"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.227746 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffbd1b0f-6a56-478b-af3b-475e978a4489-kube-api-access-dljwf" (OuterVolumeSpecName: "kube-api-access-dljwf") pod "ffbd1b0f-6a56-478b-af3b-475e978a4489" (UID: "ffbd1b0f-6a56-478b-af3b-475e978a4489"). InnerVolumeSpecName "kube-api-access-dljwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.321890 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-utilities\") pod \"community-operators-brh2s\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.322066 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-catalog-content\") pod \"community-operators-brh2s\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.322099 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5hzg\" (UniqueName: \"kubernetes.io/projected/9990de46-d41c-44ba-a23b-2f8e6e8dff13-kube-api-access-f5hzg\") pod \"community-operators-brh2s\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.322187 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dljwf\" (UniqueName: \"kubernetes.io/projected/ffbd1b0f-6a56-478b-af3b-475e978a4489-kube-api-access-dljwf\") on node \"crc\" DevicePath \"\"" Jan 26 
19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.322202 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffbd1b0f-6a56-478b-af3b-475e978a4489-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.322415 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-utilities\") pod \"community-operators-brh2s\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.322574 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-catalog-content\") pod \"community-operators-brh2s\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.342090 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5hzg\" (UniqueName: \"kubernetes.io/projected/9990de46-d41c-44ba-a23b-2f8e6e8dff13-kube-api-access-f5hzg\") pod \"community-operators-brh2s\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.440159 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.741161 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzfhs" event={"ID":"ffbd1b0f-6a56-478b-af3b-475e978a4489","Type":"ContainerDied","Data":"7434246832dc2c409b97c519ab5c50799af2461ad38ecd23e3cff9333a8e50d0"} Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.741197 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7434246832dc2c409b97c519ab5c50799af2461ad38ecd23e3cff9333a8e50d0" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.741236 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mzfhs" Jan 26 19:03:12 crc kubenswrapper[4787]: I0126 19:03:12.836392 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brh2s"] Jan 26 19:03:13 crc kubenswrapper[4787]: I0126 19:03:13.751064 4787 generic.go:334] "Generic (PLEG): container finished" podID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" containerID="6245a80c0f446bb941235421cea9f4215bcff1a065211165e566f7aa784f3861" exitCode=0 Jan 26 19:03:13 crc kubenswrapper[4787]: I0126 19:03:13.751141 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brh2s" event={"ID":"9990de46-d41c-44ba-a23b-2f8e6e8dff13","Type":"ContainerDied","Data":"6245a80c0f446bb941235421cea9f4215bcff1a065211165e566f7aa784f3861"} Jan 26 19:03:13 crc kubenswrapper[4787]: I0126 19:03:13.751181 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brh2s" event={"ID":"9990de46-d41c-44ba-a23b-2f8e6e8dff13","Type":"ContainerStarted","Data":"206626c1c7108993f7302d236e335ca63d53f0385031b6eb4344ff3ea8df4afd"} Jan 26 19:03:14 crc kubenswrapper[4787]: I0126 19:03:14.760287 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-brh2s" event={"ID":"9990de46-d41c-44ba-a23b-2f8e6e8dff13","Type":"ContainerStarted","Data":"24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6"} Jan 26 19:03:15 crc kubenswrapper[4787]: I0126 19:03:15.793483 4787 generic.go:334] "Generic (PLEG): container finished" podID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" containerID="24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6" exitCode=0 Jan 26 19:03:15 crc kubenswrapper[4787]: I0126 19:03:15.793894 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brh2s" event={"ID":"9990de46-d41c-44ba-a23b-2f8e6e8dff13","Type":"ContainerDied","Data":"24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6"} Jan 26 19:03:15 crc kubenswrapper[4787]: I0126 19:03:15.823621 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mzfhs"] Jan 26 19:03:15 crc kubenswrapper[4787]: I0126 19:03:15.828893 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mzfhs"] Jan 26 19:03:16 crc kubenswrapper[4787]: I0126 19:03:16.802395 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brh2s" event={"ID":"9990de46-d41c-44ba-a23b-2f8e6e8dff13","Type":"ContainerStarted","Data":"2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d"} Jan 26 19:03:16 crc kubenswrapper[4787]: I0126 19:03:16.821770 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-brh2s" podStartSLOduration=2.355384394 podStartE2EDuration="4.821738236s" podCreationTimestamp="2026-01-26 19:03:12 +0000 UTC" firstStartedPulling="2026-01-26 19:03:13.753720293 +0000 UTC m=+4762.460856426" lastFinishedPulling="2026-01-26 19:03:16.220074075 +0000 UTC m=+4764.927210268" observedRunningTime="2026-01-26 19:03:16.821269025 +0000 UTC m=+4765.528405178" 
watchObservedRunningTime="2026-01-26 19:03:16.821738236 +0000 UTC m=+4765.528874369" Jan 26 19:03:17 crc kubenswrapper[4787]: I0126 19:03:17.611524 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffbd1b0f-6a56-478b-af3b-475e978a4489" path="/var/lib/kubelet/pods/ffbd1b0f-6a56-478b-af3b-475e978a4489/volumes" Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.491583 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pgx2l"] Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.493867 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.499883 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgx2l"] Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.589757 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:03:19 crc kubenswrapper[4787]: E0126 19:03:19.590055 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.645385 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-catalog-content\") pod \"redhat-marketplace-pgx2l\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:19 crc kubenswrapper[4787]: 
I0126 19:03:19.645542 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-utilities\") pod \"redhat-marketplace-pgx2l\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.645574 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs2k2\" (UniqueName: \"kubernetes.io/projected/a4623ed3-7dd7-49d9-b43d-937561d5550d-kube-api-access-zs2k2\") pod \"redhat-marketplace-pgx2l\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.746616 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-catalog-content\") pod \"redhat-marketplace-pgx2l\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.746711 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-utilities\") pod \"redhat-marketplace-pgx2l\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.746742 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs2k2\" (UniqueName: \"kubernetes.io/projected/a4623ed3-7dd7-49d9-b43d-937561d5550d-kube-api-access-zs2k2\") pod \"redhat-marketplace-pgx2l\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:19 crc 
kubenswrapper[4787]: I0126 19:03:19.747453 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-catalog-content\") pod \"redhat-marketplace-pgx2l\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.747676 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-utilities\") pod \"redhat-marketplace-pgx2l\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.766288 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs2k2\" (UniqueName: \"kubernetes.io/projected/a4623ed3-7dd7-49d9-b43d-937561d5550d-kube-api-access-zs2k2\") pod \"redhat-marketplace-pgx2l\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:19 crc kubenswrapper[4787]: I0126 19:03:19.822645 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:20 crc kubenswrapper[4787]: I0126 19:03:20.278869 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgx2l"] Jan 26 19:03:20 crc kubenswrapper[4787]: W0126 19:03:20.289478 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4623ed3_7dd7_49d9_b43d_937561d5550d.slice/crio-289d6a39ad22c290a18437b7d69526ae199809c37befa2050dbde166263e0d39 WatchSource:0}: Error finding container 289d6a39ad22c290a18437b7d69526ae199809c37befa2050dbde166263e0d39: Status 404 returned error can't find the container with id 289d6a39ad22c290a18437b7d69526ae199809c37befa2050dbde166263e0d39 Jan 26 19:03:20 crc kubenswrapper[4787]: I0126 19:03:20.835101 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kr22b"] Jan 26 19:03:20 crc kubenswrapper[4787]: I0126 19:03:20.836382 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kr22b" Jan 26 19:03:20 crc kubenswrapper[4787]: I0126 19:03:20.840815 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 19:03:20 crc kubenswrapper[4787]: I0126 19:03:20.842823 4787 generic.go:334] "Generic (PLEG): container finished" podID="a4623ed3-7dd7-49d9-b43d-937561d5550d" containerID="2fd64a4f189a29dfd22425fad44d8de330474d8c8182cadd6c21ba40095a428b" exitCode=0 Jan 26 19:03:20 crc kubenswrapper[4787]: I0126 19:03:20.842879 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgx2l" event={"ID":"a4623ed3-7dd7-49d9-b43d-937561d5550d","Type":"ContainerDied","Data":"2fd64a4f189a29dfd22425fad44d8de330474d8c8182cadd6c21ba40095a428b"} Jan 26 19:03:20 crc kubenswrapper[4787]: I0126 19:03:20.842907 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgx2l" event={"ID":"a4623ed3-7dd7-49d9-b43d-937561d5550d","Type":"ContainerStarted","Data":"289d6a39ad22c290a18437b7d69526ae199809c37befa2050dbde166263e0d39"} Jan 26 19:03:20 crc kubenswrapper[4787]: I0126 19:03:20.848520 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kr22b"] Jan 26 19:03:20 crc kubenswrapper[4787]: I0126 19:03:20.970824 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjltt\" (UniqueName: \"kubernetes.io/projected/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-kube-api-access-xjltt\") pod \"root-account-create-update-kr22b\" (UID: \"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a\") " pod="openstack/root-account-create-update-kr22b" Jan 26 19:03:20 crc kubenswrapper[4787]: I0126 19:03:20.971024 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-operator-scripts\") pod \"root-account-create-update-kr22b\" (UID: \"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a\") " pod="openstack/root-account-create-update-kr22b" Jan 26 19:03:21 crc kubenswrapper[4787]: I0126 19:03:21.072362 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-operator-scripts\") pod \"root-account-create-update-kr22b\" (UID: \"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a\") " pod="openstack/root-account-create-update-kr22b" Jan 26 19:03:21 crc kubenswrapper[4787]: I0126 19:03:21.072461 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjltt\" (UniqueName: \"kubernetes.io/projected/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-kube-api-access-xjltt\") pod \"root-account-create-update-kr22b\" (UID: \"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a\") " pod="openstack/root-account-create-update-kr22b" Jan 26 19:03:21 crc kubenswrapper[4787]: I0126 19:03:21.073180 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-operator-scripts\") pod \"root-account-create-update-kr22b\" (UID: \"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a\") " pod="openstack/root-account-create-update-kr22b" Jan 26 19:03:21 crc kubenswrapper[4787]: I0126 19:03:21.094611 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjltt\" (UniqueName: \"kubernetes.io/projected/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-kube-api-access-xjltt\") pod \"root-account-create-update-kr22b\" (UID: \"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a\") " pod="openstack/root-account-create-update-kr22b" Jan 26 19:03:21 crc kubenswrapper[4787]: I0126 19:03:21.169134 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kr22b" Jan 26 19:03:21 crc kubenswrapper[4787]: I0126 19:03:21.623393 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kr22b"] Jan 26 19:03:21 crc kubenswrapper[4787]: W0126 19:03:21.628185 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bf1cf16_fa6e_453d_8b40_55fa345eeb7a.slice/crio-76e91a5fea4e96acc0cf47f24ffd417de55c5bd094016b75c5c38fd75aea0086 WatchSource:0}: Error finding container 76e91a5fea4e96acc0cf47f24ffd417de55c5bd094016b75c5c38fd75aea0086: Status 404 returned error can't find the container with id 76e91a5fea4e96acc0cf47f24ffd417de55c5bd094016b75c5c38fd75aea0086 Jan 26 19:03:21 crc kubenswrapper[4787]: I0126 19:03:21.853602 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kr22b" event={"ID":"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a","Type":"ContainerStarted","Data":"96475a93261e7f0cef713f1e76bca10ff79a163748872a4cf5f08f7623454c36"} Jan 26 19:03:21 crc kubenswrapper[4787]: I0126 19:03:21.854401 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kr22b" event={"ID":"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a","Type":"ContainerStarted","Data":"76e91a5fea4e96acc0cf47f24ffd417de55c5bd094016b75c5c38fd75aea0086"} Jan 26 19:03:21 crc kubenswrapper[4787]: I0126 19:03:21.856878 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgx2l" event={"ID":"a4623ed3-7dd7-49d9-b43d-937561d5550d","Type":"ContainerStarted","Data":"0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd"} Jan 26 19:03:21 crc kubenswrapper[4787]: I0126 19:03:21.871581 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-kr22b" podStartSLOduration=1.8715638380000001 
podStartE2EDuration="1.871563838s" podCreationTimestamp="2026-01-26 19:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:21.86839827 +0000 UTC m=+4770.575534413" watchObservedRunningTime="2026-01-26 19:03:21.871563838 +0000 UTC m=+4770.578699981" Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.440790 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.440897 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.482225 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.867912 4787 generic.go:334] "Generic (PLEG): container finished" podID="0bf1cf16-fa6e-453d-8b40-55fa345eeb7a" containerID="96475a93261e7f0cef713f1e76bca10ff79a163748872a4cf5f08f7623454c36" exitCode=0 Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.868019 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kr22b" event={"ID":"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a","Type":"ContainerDied","Data":"96475a93261e7f0cef713f1e76bca10ff79a163748872a4cf5f08f7623454c36"} Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.870766 4787 generic.go:334] "Generic (PLEG): container finished" podID="92fa88a3-7fe7-434f-a74e-9f9c017f5b88" containerID="443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2" exitCode=0 Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.870866 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"92fa88a3-7fe7-434f-a74e-9f9c017f5b88","Type":"ContainerDied","Data":"443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2"} Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.874345 4787 generic.go:334] "Generic (PLEG): container finished" podID="2cda9161-6599-4f76-b5b6-d3644a04059a" containerID="bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5" exitCode=0 Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.874451 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2cda9161-6599-4f76-b5b6-d3644a04059a","Type":"ContainerDied","Data":"bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5"} Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.880846 4787 generic.go:334] "Generic (PLEG): container finished" podID="a4623ed3-7dd7-49d9-b43d-937561d5550d" containerID="0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd" exitCode=0 Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.881037 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgx2l" event={"ID":"a4623ed3-7dd7-49d9-b43d-937561d5550d","Type":"ContainerDied","Data":"0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd"} Jan 26 19:03:22 crc kubenswrapper[4787]: I0126 19:03:22.955687 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:23 crc kubenswrapper[4787]: I0126 19:03:23.890908 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92fa88a3-7fe7-434f-a74e-9f9c017f5b88","Type":"ContainerStarted","Data":"258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648"} Jan 26 19:03:23 crc kubenswrapper[4787]: I0126 19:03:23.891385 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:23 crc kubenswrapper[4787]: 
I0126 19:03:23.892723 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2cda9161-6599-4f76-b5b6-d3644a04059a","Type":"ContainerStarted","Data":"d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d"} Jan 26 19:03:23 crc kubenswrapper[4787]: I0126 19:03:23.892927 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 19:03:23 crc kubenswrapper[4787]: I0126 19:03:23.894604 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgx2l" event={"ID":"a4623ed3-7dd7-49d9-b43d-937561d5550d","Type":"ContainerStarted","Data":"83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7"} Jan 26 19:03:23 crc kubenswrapper[4787]: I0126 19:03:23.919982 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.919943934 podStartE2EDuration="35.919943934s" podCreationTimestamp="2026-01-26 19:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:23.916762087 +0000 UTC m=+4772.623898220" watchObservedRunningTime="2026-01-26 19:03:23.919943934 +0000 UTC m=+4772.627080067" Jan 26 19:03:23 crc kubenswrapper[4787]: I0126 19:03:23.972113 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.972097087 podStartE2EDuration="36.972097087s" podCreationTimestamp="2026-01-26 19:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:23.947333278 +0000 UTC m=+4772.654469421" watchObservedRunningTime="2026-01-26 19:03:23.972097087 +0000 UTC m=+4772.679233220" Jan 26 19:03:23 crc kubenswrapper[4787]: I0126 19:03:23.973507 4787 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-marketplace-pgx2l" podStartSLOduration=2.494898668 podStartE2EDuration="4.973500062s" podCreationTimestamp="2026-01-26 19:03:19 +0000 UTC" firstStartedPulling="2026-01-26 19:03:20.84493099 +0000 UTC m=+4769.552067163" lastFinishedPulling="2026-01-26 19:03:23.323532424 +0000 UTC m=+4772.030668557" observedRunningTime="2026-01-26 19:03:23.967675379 +0000 UTC m=+4772.674811512" watchObservedRunningTime="2026-01-26 19:03:23.973500062 +0000 UTC m=+4772.680636195" Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.172496 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kr22b" Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.222461 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjltt\" (UniqueName: \"kubernetes.io/projected/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-kube-api-access-xjltt\") pod \"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a\" (UID: \"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a\") " Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.222557 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-operator-scripts\") pod \"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a\" (UID: \"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a\") " Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.223112 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bf1cf16-fa6e-453d-8b40-55fa345eeb7a" (UID: "0bf1cf16-fa6e-453d-8b40-55fa345eeb7a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.228342 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-kube-api-access-xjltt" (OuterVolumeSpecName: "kube-api-access-xjltt") pod "0bf1cf16-fa6e-453d-8b40-55fa345eeb7a" (UID: "0bf1cf16-fa6e-453d-8b40-55fa345eeb7a"). InnerVolumeSpecName "kube-api-access-xjltt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.323843 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjltt\" (UniqueName: \"kubernetes.io/projected/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-kube-api-access-xjltt\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.323872 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.873433 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brh2s"] Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.904591 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kr22b" event={"ID":"0bf1cf16-fa6e-453d-8b40-55fa345eeb7a","Type":"ContainerDied","Data":"76e91a5fea4e96acc0cf47f24ffd417de55c5bd094016b75c5c38fd75aea0086"} Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.904638 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76e91a5fea4e96acc0cf47f24ffd417de55c5bd094016b75c5c38fd75aea0086" Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.904607 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kr22b" Jan 26 19:03:24 crc kubenswrapper[4787]: I0126 19:03:24.904732 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-brh2s" podUID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" containerName="registry-server" containerID="cri-o://2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d" gracePeriod=2 Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.795820 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.842164 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-catalog-content\") pod \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.842305 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5hzg\" (UniqueName: \"kubernetes.io/projected/9990de46-d41c-44ba-a23b-2f8e6e8dff13-kube-api-access-f5hzg\") pod \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.842353 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-utilities\") pod \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\" (UID: \"9990de46-d41c-44ba-a23b-2f8e6e8dff13\") " Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.843099 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-utilities" (OuterVolumeSpecName: "utilities") pod 
"9990de46-d41c-44ba-a23b-2f8e6e8dff13" (UID: "9990de46-d41c-44ba-a23b-2f8e6e8dff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.848230 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9990de46-d41c-44ba-a23b-2f8e6e8dff13-kube-api-access-f5hzg" (OuterVolumeSpecName: "kube-api-access-f5hzg") pod "9990de46-d41c-44ba-a23b-2f8e6e8dff13" (UID: "9990de46-d41c-44ba-a23b-2f8e6e8dff13"). InnerVolumeSpecName "kube-api-access-f5hzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.901855 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9990de46-d41c-44ba-a23b-2f8e6e8dff13" (UID: "9990de46-d41c-44ba-a23b-2f8e6e8dff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.913191 4787 generic.go:334] "Generic (PLEG): container finished" podID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" containerID="2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d" exitCode=0 Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.913232 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brh2s" event={"ID":"9990de46-d41c-44ba-a23b-2f8e6e8dff13","Type":"ContainerDied","Data":"2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d"} Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.913256 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brh2s" event={"ID":"9990de46-d41c-44ba-a23b-2f8e6e8dff13","Type":"ContainerDied","Data":"206626c1c7108993f7302d236e335ca63d53f0385031b6eb4344ff3ea8df4afd"} Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.913273 4787 scope.go:117] "RemoveContainer" containerID="2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.913383 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brh2s" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.944615 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.944664 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5hzg\" (UniqueName: \"kubernetes.io/projected/9990de46-d41c-44ba-a23b-2f8e6e8dff13-kube-api-access-f5hzg\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.944676 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9990de46-d41c-44ba-a23b-2f8e6e8dff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.949916 4787 scope.go:117] "RemoveContainer" containerID="24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.954154 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brh2s"] Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.960073 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-brh2s"] Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.968509 4787 scope.go:117] "RemoveContainer" containerID="6245a80c0f446bb941235421cea9f4215bcff1a065211165e566f7aa784f3861" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.998707 4787 scope.go:117] "RemoveContainer" containerID="2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d" Jan 26 19:03:25 crc kubenswrapper[4787]: E0126 19:03:25.999158 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d\": container with ID starting with 2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d not found: ID does not exist" containerID="2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.999201 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d"} err="failed to get container status \"2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d\": rpc error: code = NotFound desc = could not find container \"2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d\": container with ID starting with 2416c1f9e4105bc972510d6c4919075b77c50ea245c08f3aef82a1e0952efc8d not found: ID does not exist" Jan 26 19:03:25 crc kubenswrapper[4787]: I0126 19:03:25.999232 4787 scope.go:117] "RemoveContainer" containerID="24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6" Jan 26 19:03:26 crc kubenswrapper[4787]: E0126 19:03:26.002174 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6\": container with ID starting with 24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6 not found: ID does not exist" containerID="24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6" Jan 26 19:03:26 crc kubenswrapper[4787]: I0126 19:03:26.002212 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6"} err="failed to get container status \"24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6\": rpc error: code = NotFound desc = could not find container \"24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6\": container with ID 
starting with 24e09c3ff095a934ce8b5b0d63c171a89f3d151a65c5f829f7d62cbb17b4c1c6 not found: ID does not exist" Jan 26 19:03:26 crc kubenswrapper[4787]: I0126 19:03:26.002242 4787 scope.go:117] "RemoveContainer" containerID="6245a80c0f446bb941235421cea9f4215bcff1a065211165e566f7aa784f3861" Jan 26 19:03:26 crc kubenswrapper[4787]: E0126 19:03:26.004784 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6245a80c0f446bb941235421cea9f4215bcff1a065211165e566f7aa784f3861\": container with ID starting with 6245a80c0f446bb941235421cea9f4215bcff1a065211165e566f7aa784f3861 not found: ID does not exist" containerID="6245a80c0f446bb941235421cea9f4215bcff1a065211165e566f7aa784f3861" Jan 26 19:03:26 crc kubenswrapper[4787]: I0126 19:03:26.004825 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6245a80c0f446bb941235421cea9f4215bcff1a065211165e566f7aa784f3861"} err="failed to get container status \"6245a80c0f446bb941235421cea9f4215bcff1a065211165e566f7aa784f3861\": rpc error: code = NotFound desc = could not find container \"6245a80c0f446bb941235421cea9f4215bcff1a065211165e566f7aa784f3861\": container with ID starting with 6245a80c0f446bb941235421cea9f4215bcff1a065211165e566f7aa784f3861 not found: ID does not exist" Jan 26 19:03:27 crc kubenswrapper[4787]: I0126 19:03:27.598508 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" path="/var/lib/kubelet/pods/9990de46-d41c-44ba-a23b-2f8e6e8dff13/volumes" Jan 26 19:03:29 crc kubenswrapper[4787]: I0126 19:03:29.823074 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:29 crc kubenswrapper[4787]: I0126 19:03:29.823536 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:29 crc 
kubenswrapper[4787]: I0126 19:03:29.885621 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:29 crc kubenswrapper[4787]: I0126 19:03:29.996180 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:30 crc kubenswrapper[4787]: I0126 19:03:30.120755 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgx2l"] Jan 26 19:03:31 crc kubenswrapper[4787]: I0126 19:03:31.959563 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pgx2l" podUID="a4623ed3-7dd7-49d9-b43d-937561d5550d" containerName="registry-server" containerID="cri-o://83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7" gracePeriod=2 Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.413082 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.540996 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-utilities\") pod \"a4623ed3-7dd7-49d9-b43d-937561d5550d\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.541104 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs2k2\" (UniqueName: \"kubernetes.io/projected/a4623ed3-7dd7-49d9-b43d-937561d5550d-kube-api-access-zs2k2\") pod \"a4623ed3-7dd7-49d9-b43d-937561d5550d\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.541155 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-catalog-content\") pod \"a4623ed3-7dd7-49d9-b43d-937561d5550d\" (UID: \"a4623ed3-7dd7-49d9-b43d-937561d5550d\") " Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.550370 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4623ed3-7dd7-49d9-b43d-937561d5550d-kube-api-access-zs2k2" (OuterVolumeSpecName: "kube-api-access-zs2k2") pod "a4623ed3-7dd7-49d9-b43d-937561d5550d" (UID: "a4623ed3-7dd7-49d9-b43d-937561d5550d"). InnerVolumeSpecName "kube-api-access-zs2k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.560761 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-utilities" (OuterVolumeSpecName: "utilities") pod "a4623ed3-7dd7-49d9-b43d-937561d5550d" (UID: "a4623ed3-7dd7-49d9-b43d-937561d5550d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.571113 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4623ed3-7dd7-49d9-b43d-937561d5550d" (UID: "a4623ed3-7dd7-49d9-b43d-937561d5550d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.643482 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.643539 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs2k2\" (UniqueName: \"kubernetes.io/projected/a4623ed3-7dd7-49d9-b43d-937561d5550d-kube-api-access-zs2k2\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.643555 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4623ed3-7dd7-49d9-b43d-937561d5550d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.968437 4787 generic.go:334] "Generic (PLEG): container finished" podID="a4623ed3-7dd7-49d9-b43d-937561d5550d" containerID="83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7" exitCode=0 Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.968489 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgx2l" event={"ID":"a4623ed3-7dd7-49d9-b43d-937561d5550d","Type":"ContainerDied","Data":"83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7"} Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.968529 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-pgx2l" event={"ID":"a4623ed3-7dd7-49d9-b43d-937561d5550d","Type":"ContainerDied","Data":"289d6a39ad22c290a18437b7d69526ae199809c37befa2050dbde166263e0d39"} Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.968552 4787 scope.go:117] "RemoveContainer" containerID="83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7" Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.968494 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgx2l" Jan 26 19:03:32 crc kubenswrapper[4787]: I0126 19:03:32.987227 4787 scope.go:117] "RemoveContainer" containerID="0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd" Jan 26 19:03:33 crc kubenswrapper[4787]: I0126 19:03:33.005644 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgx2l"] Jan 26 19:03:33 crc kubenswrapper[4787]: I0126 19:03:33.013392 4787 scope.go:117] "RemoveContainer" containerID="2fd64a4f189a29dfd22425fad44d8de330474d8c8182cadd6c21ba40095a428b" Jan 26 19:03:33 crc kubenswrapper[4787]: I0126 19:03:33.016693 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgx2l"] Jan 26 19:03:33 crc kubenswrapper[4787]: I0126 19:03:33.040827 4787 scope.go:117] "RemoveContainer" containerID="83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7" Jan 26 19:03:33 crc kubenswrapper[4787]: E0126 19:03:33.041353 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7\": container with ID starting with 83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7 not found: ID does not exist" containerID="83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7" Jan 26 19:03:33 crc kubenswrapper[4787]: I0126 19:03:33.041395 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7"} err="failed to get container status \"83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7\": rpc error: code = NotFound desc = could not find container \"83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7\": container with ID starting with 83105e27816b6397d8ed313c792f0764f8472569cf4a93bf5cd718a3c881cfd7 not found: ID does not exist" Jan 26 19:03:33 crc kubenswrapper[4787]: I0126 19:03:33.041425 4787 scope.go:117] "RemoveContainer" containerID="0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd" Jan 26 19:03:33 crc kubenswrapper[4787]: E0126 19:03:33.041741 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd\": container with ID starting with 0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd not found: ID does not exist" containerID="0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd" Jan 26 19:03:33 crc kubenswrapper[4787]: I0126 19:03:33.041774 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd"} err="failed to get container status \"0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd\": rpc error: code = NotFound desc = could not find container \"0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd\": container with ID starting with 0ff189ce21be7b6ae673932cb52fe0f44448d2201c41e34573ef7d8c8bba7cfd not found: ID does not exist" Jan 26 19:03:33 crc kubenswrapper[4787]: I0126 19:03:33.041794 4787 scope.go:117] "RemoveContainer" containerID="2fd64a4f189a29dfd22425fad44d8de330474d8c8182cadd6c21ba40095a428b" Jan 26 19:03:33 crc kubenswrapper[4787]: E0126 
19:03:33.042090 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd64a4f189a29dfd22425fad44d8de330474d8c8182cadd6c21ba40095a428b\": container with ID starting with 2fd64a4f189a29dfd22425fad44d8de330474d8c8182cadd6c21ba40095a428b not found: ID does not exist" containerID="2fd64a4f189a29dfd22425fad44d8de330474d8c8182cadd6c21ba40095a428b" Jan 26 19:03:33 crc kubenswrapper[4787]: I0126 19:03:33.042119 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd64a4f189a29dfd22425fad44d8de330474d8c8182cadd6c21ba40095a428b"} err="failed to get container status \"2fd64a4f189a29dfd22425fad44d8de330474d8c8182cadd6c21ba40095a428b\": rpc error: code = NotFound desc = could not find container \"2fd64a4f189a29dfd22425fad44d8de330474d8c8182cadd6c21ba40095a428b\": container with ID starting with 2fd64a4f189a29dfd22425fad44d8de330474d8c8182cadd6c21ba40095a428b not found: ID does not exist" Jan 26 19:03:33 crc kubenswrapper[4787]: I0126 19:03:33.600089 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4623ed3-7dd7-49d9-b43d-937561d5550d" path="/var/lib/kubelet/pods/a4623ed3-7dd7-49d9-b43d-937561d5550d/volumes" Jan 26 19:03:34 crc kubenswrapper[4787]: I0126 19:03:34.589809 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:03:34 crc kubenswrapper[4787]: E0126 19:03:34.590217 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:03:39 crc kubenswrapper[4787]: I0126 19:03:39.169229 
4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 19:03:39 crc kubenswrapper[4787]: I0126 19:03:39.430180 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.658202 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-rfgzb"] Jan 26 19:03:43 crc kubenswrapper[4787]: E0126 19:03:43.658744 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" containerName="extract-content" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.658756 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" containerName="extract-content" Jan 26 19:03:43 crc kubenswrapper[4787]: E0126 19:03:43.658769 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4623ed3-7dd7-49d9-b43d-937561d5550d" containerName="extract-utilities" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.658776 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4623ed3-7dd7-49d9-b43d-937561d5550d" containerName="extract-utilities" Jan 26 19:03:43 crc kubenswrapper[4787]: E0126 19:03:43.658789 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" containerName="registry-server" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.658795 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" containerName="registry-server" Jan 26 19:03:43 crc kubenswrapper[4787]: E0126 19:03:43.658802 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf1cf16-fa6e-453d-8b40-55fa345eeb7a" containerName="mariadb-account-create-update" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.658807 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0bf1cf16-fa6e-453d-8b40-55fa345eeb7a" containerName="mariadb-account-create-update" Jan 26 19:03:43 crc kubenswrapper[4787]: E0126 19:03:43.658814 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4623ed3-7dd7-49d9-b43d-937561d5550d" containerName="extract-content" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.658819 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4623ed3-7dd7-49d9-b43d-937561d5550d" containerName="extract-content" Jan 26 19:03:43 crc kubenswrapper[4787]: E0126 19:03:43.658829 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4623ed3-7dd7-49d9-b43d-937561d5550d" containerName="registry-server" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.658835 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4623ed3-7dd7-49d9-b43d-937561d5550d" containerName="registry-server" Jan 26 19:03:43 crc kubenswrapper[4787]: E0126 19:03:43.658843 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" containerName="extract-utilities" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.658849 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" containerName="extract-utilities" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.659008 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf1cf16-fa6e-453d-8b40-55fa345eeb7a" containerName="mariadb-account-create-update" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.659023 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9990de46-d41c-44ba-a23b-2f8e6e8dff13" containerName="registry-server" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.659032 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4623ed3-7dd7-49d9-b43d-937561d5550d" containerName="registry-server" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.659733 4787 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.673634 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-rfgzb"] Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.819647 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-dns-svc\") pod \"dnsmasq-dns-699964fbc-rfgzb\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.819709 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrzvn\" (UniqueName: \"kubernetes.io/projected/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-kube-api-access-mrzvn\") pod \"dnsmasq-dns-699964fbc-rfgzb\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.819735 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-config\") pod \"dnsmasq-dns-699964fbc-rfgzb\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.921103 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-dns-svc\") pod \"dnsmasq-dns-699964fbc-rfgzb\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.921158 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mrzvn\" (UniqueName: \"kubernetes.io/projected/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-kube-api-access-mrzvn\") pod \"dnsmasq-dns-699964fbc-rfgzb\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.921174 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-config\") pod \"dnsmasq-dns-699964fbc-rfgzb\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.922311 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-config\") pod \"dnsmasq-dns-699964fbc-rfgzb\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.922312 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-dns-svc\") pod \"dnsmasq-dns-699964fbc-rfgzb\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:43 crc kubenswrapper[4787]: I0126 19:03:43.950192 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrzvn\" (UniqueName: \"kubernetes.io/projected/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-kube-api-access-mrzvn\") pod \"dnsmasq-dns-699964fbc-rfgzb\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:44 crc kubenswrapper[4787]: I0126 19:03:44.031450 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:44 crc kubenswrapper[4787]: I0126 19:03:44.419956 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:03:44 crc kubenswrapper[4787]: I0126 19:03:44.505917 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-rfgzb"] Jan 26 19:03:45 crc kubenswrapper[4787]: I0126 19:03:45.024490 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:03:45 crc kubenswrapper[4787]: I0126 19:03:45.070422 4787 generic.go:334] "Generic (PLEG): container finished" podID="eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" containerID="7a174cb039ee372d9827c857b836be7da4fc0eefaea7c774ae9c7e6e5b63380c" exitCode=0 Jan 26 19:03:45 crc kubenswrapper[4787]: I0126 19:03:45.070577 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" event={"ID":"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f","Type":"ContainerDied","Data":"7a174cb039ee372d9827c857b836be7da4fc0eefaea7c774ae9c7e6e5b63380c"} Jan 26 19:03:45 crc kubenswrapper[4787]: I0126 19:03:45.071887 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" event={"ID":"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f","Type":"ContainerStarted","Data":"7890052e5caaf68f9a99e1ab09f4083731f06637793ce599a96f5d3cec4c7d7d"} Jan 26 19:03:46 crc kubenswrapper[4787]: I0126 19:03:46.081557 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" event={"ID":"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f","Type":"ContainerStarted","Data":"c46c681e5dd2f7b517c011d458cb9b86f3ac2c884bab1ce8f9d4326622c34e46"} Jan 26 19:03:46 crc kubenswrapper[4787]: I0126 19:03:46.083402 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:46 crc kubenswrapper[4787]: I0126 19:03:46.110664 4787 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" podStartSLOduration=3.110639057 podStartE2EDuration="3.110639057s" podCreationTimestamp="2026-01-26 19:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:46.105331268 +0000 UTC m=+4794.812467421" watchObservedRunningTime="2026-01-26 19:03:46.110639057 +0000 UTC m=+4794.817775230" Jan 26 19:03:46 crc kubenswrapper[4787]: I0126 19:03:46.286294 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2cda9161-6599-4f76-b5b6-d3644a04059a" containerName="rabbitmq" containerID="cri-o://d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d" gracePeriod=604799 Jan 26 19:03:47 crc kubenswrapper[4787]: I0126 19:03:47.156891 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="92fa88a3-7fe7-434f-a74e-9f9c017f5b88" containerName="rabbitmq" containerID="cri-o://258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648" gracePeriod=604798 Jan 26 19:03:48 crc kubenswrapper[4787]: I0126 19:03:48.590377 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:03:48 crc kubenswrapper[4787]: E0126 19:03:48.590799 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:03:49 crc kubenswrapper[4787]: I0126 19:03:49.167646 4787 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/rabbitmq-server-0" podUID="2cda9161-6599-4f76-b5b6-d3644a04059a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.238:5672: connect: connection refused" Jan 26 19:03:49 crc kubenswrapper[4787]: I0126 19:03:49.429143 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="92fa88a3-7fe7-434f-a74e-9f9c017f5b88" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.239:5672: connect: connection refused" Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.842157 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.974418 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93\") pod \"2cda9161-6599-4f76-b5b6-d3644a04059a\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.974500 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-server-conf\") pod \"2cda9161-6599-4f76-b5b6-d3644a04059a\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.974535 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-confd\") pod \"2cda9161-6599-4f76-b5b6-d3644a04059a\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.974627 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-plugins-conf\") pod \"2cda9161-6599-4f76-b5b6-d3644a04059a\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.974689 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-erlang-cookie\") pod \"2cda9161-6599-4f76-b5b6-d3644a04059a\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.974720 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps6nc\" (UniqueName: \"kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-kube-api-access-ps6nc\") pod \"2cda9161-6599-4f76-b5b6-d3644a04059a\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.974786 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2cda9161-6599-4f76-b5b6-d3644a04059a-erlang-cookie-secret\") pod \"2cda9161-6599-4f76-b5b6-d3644a04059a\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.974827 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2cda9161-6599-4f76-b5b6-d3644a04059a-pod-info\") pod \"2cda9161-6599-4f76-b5b6-d3644a04059a\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.974851 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-plugins\") pod \"2cda9161-6599-4f76-b5b6-d3644a04059a\" (UID: \"2cda9161-6599-4f76-b5b6-d3644a04059a\") " Jan 26 19:03:52 crc 
kubenswrapper[4787]: I0126 19:03:52.975621 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2cda9161-6599-4f76-b5b6-d3644a04059a" (UID: "2cda9161-6599-4f76-b5b6-d3644a04059a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.975649 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2cda9161-6599-4f76-b5b6-d3644a04059a" (UID: "2cda9161-6599-4f76-b5b6-d3644a04059a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.975770 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2cda9161-6599-4f76-b5b6-d3644a04059a" (UID: "2cda9161-6599-4f76-b5b6-d3644a04059a"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.976107 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.976136 4787 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.976173 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.981095 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cda9161-6599-4f76-b5b6-d3644a04059a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2cda9161-6599-4f76-b5b6-d3644a04059a" (UID: "2cda9161-6599-4f76-b5b6-d3644a04059a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.981188 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2cda9161-6599-4f76-b5b6-d3644a04059a-pod-info" (OuterVolumeSpecName: "pod-info") pod "2cda9161-6599-4f76-b5b6-d3644a04059a" (UID: "2cda9161-6599-4f76-b5b6-d3644a04059a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.990178 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-kube-api-access-ps6nc" (OuterVolumeSpecName: "kube-api-access-ps6nc") pod "2cda9161-6599-4f76-b5b6-d3644a04059a" (UID: "2cda9161-6599-4f76-b5b6-d3644a04059a"). InnerVolumeSpecName "kube-api-access-ps6nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:52 crc kubenswrapper[4787]: I0126 19:03:52.998637 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93" (OuterVolumeSpecName: "persistence") pod "2cda9161-6599-4f76-b5b6-d3644a04059a" (UID: "2cda9161-6599-4f76-b5b6-d3644a04059a"). InnerVolumeSpecName "pvc-85063044-b552-4be6-ae11-9a5761b3ce93". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.003186 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-server-conf" (OuterVolumeSpecName: "server-conf") pod "2cda9161-6599-4f76-b5b6-d3644a04059a" (UID: "2cda9161-6599-4f76-b5b6-d3644a04059a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.066394 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2cda9161-6599-4f76-b5b6-d3644a04059a" (UID: "2cda9161-6599-4f76-b5b6-d3644a04059a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.077329 4787 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2cda9161-6599-4f76-b5b6-d3644a04059a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.077384 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-85063044-b552-4be6-ae11-9a5761b3ce93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93\") on node \"crc\" " Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.077396 4787 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2cda9161-6599-4f76-b5b6-d3644a04059a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.077405 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.077416 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps6nc\" (UniqueName: \"kubernetes.io/projected/2cda9161-6599-4f76-b5b6-d3644a04059a-kube-api-access-ps6nc\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.077426 4787 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2cda9161-6599-4f76-b5b6-d3644a04059a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.096212 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.096394 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-85063044-b552-4be6-ae11-9a5761b3ce93" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93") on node "crc" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.153304 4787 generic.go:334] "Generic (PLEG): container finished" podID="2cda9161-6599-4f76-b5b6-d3644a04059a" containerID="d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d" exitCode=0 Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.153369 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.153673 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2cda9161-6599-4f76-b5b6-d3644a04059a","Type":"ContainerDied","Data":"d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d"} Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.153781 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2cda9161-6599-4f76-b5b6-d3644a04059a","Type":"ContainerDied","Data":"6abe01ebeedcd738744caee8ba8bf1ee25a6d868dce9b4b0ca68b4af557a02ab"} Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.153851 4787 scope.go:117] "RemoveContainer" containerID="d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.178511 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-85063044-b552-4be6-ae11-9a5761b3ce93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.181572 4787 scope.go:117] "RemoveContainer" 
containerID="bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.200988 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.210307 4787 scope.go:117] "RemoveContainer" containerID="d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d" Jan 26 19:03:53 crc kubenswrapper[4787]: E0126 19:03:53.212135 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d\": container with ID starting with d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d not found: ID does not exist" containerID="d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.212204 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d"} err="failed to get container status \"d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d\": rpc error: code = NotFound desc = could not find container \"d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d\": container with ID starting with d44caef3d47dc41566590f9f650a987b93cb9da2bb11c1cd04d469392c28bd4d not found: ID does not exist" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.212650 4787 scope.go:117] "RemoveContainer" containerID="bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.212771 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:03:53 crc kubenswrapper[4787]: E0126 19:03:53.220431 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5\": container with ID starting with bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5 not found: ID does not exist" containerID="bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.220539 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5"} err="failed to get container status \"bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5\": rpc error: code = NotFound desc = could not find container \"bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5\": container with ID starting with bad09e62091683df8a23f473443931066f9ad9cea92fa59af73be045da3643a5 not found: ID does not exist" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.226577 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:03:53 crc kubenswrapper[4787]: E0126 19:03:53.226862 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cda9161-6599-4f76-b5b6-d3644a04059a" containerName="setup-container" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.226875 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cda9161-6599-4f76-b5b6-d3644a04059a" containerName="setup-container" Jan 26 19:03:53 crc kubenswrapper[4787]: E0126 19:03:53.226909 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cda9161-6599-4f76-b5b6-d3644a04059a" containerName="rabbitmq" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.226915 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cda9161-6599-4f76-b5b6-d3644a04059a" containerName="rabbitmq" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.227058 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cda9161-6599-4f76-b5b6-d3644a04059a" containerName="rabbitmq" Jan 
26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.227874 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.235635 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.235980 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mtjmw" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.236294 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.236522 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.237873 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.246331 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.391459 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.391559 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 
19:03:53.391581 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.391733 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.391792 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-85063044-b552-4be6-ae11-9a5761b3ce93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.391829 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.391887 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msx4z\" (UniqueName: \"kubernetes.io/projected/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-kube-api-access-msx4z\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 
19:03:53.391915 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.391935 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.493791 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msx4z\" (UniqueName: \"kubernetes.io/projected/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-kube-api-access-msx4z\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.493859 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.493887 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.493929 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.494014 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.494039 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.494070 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.494103 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-85063044-b552-4be6-ae11-9a5761b3ce93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.494135 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.494852 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.495347 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.496971 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.497015 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-85063044-b552-4be6-ae11-9a5761b3ce93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3375fc6b96b1abae189094e7496b8ac2aba4ff1a6fae2a30145423eddf7041ea/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.497416 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.498465 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.499462 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.499936 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") 
" pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.504777 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.512412 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msx4z\" (UniqueName: \"kubernetes.io/projected/a09fe28b-c9a5-46b1-a327-c9f4eac2036f-kube-api-access-msx4z\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.535370 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-85063044-b552-4be6-ae11-9a5761b3ce93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85063044-b552-4be6-ae11-9a5761b3ce93\") pod \"rabbitmq-server-0\" (UID: \"a09fe28b-c9a5-46b1-a327-c9f4eac2036f\") " pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.575728 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.610027 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cda9161-6599-4f76-b5b6-d3644a04059a" path="/var/lib/kubelet/pods/2cda9161-6599-4f76-b5b6-d3644a04059a/volumes" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.778761 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.906331 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f\") pod \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.906415 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjxn4\" (UniqueName: \"kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-kube-api-access-cjxn4\") pod \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.906443 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-erlang-cookie-secret\") pod \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.906517 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-pod-info\") pod \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.906544 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-server-conf\") pod \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.906563 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-confd\") pod \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.906614 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-plugins-conf\") pod \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.906650 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-plugins\") pod \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.906703 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-erlang-cookie\") pod \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\" (UID: \"92fa88a3-7fe7-434f-a74e-9f9c017f5b88\") " Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.907812 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "92fa88a3-7fe7-434f-a74e-9f9c017f5b88" (UID: "92fa88a3-7fe7-434f-a74e-9f9c017f5b88"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.908826 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "92fa88a3-7fe7-434f-a74e-9f9c017f5b88" (UID: "92fa88a3-7fe7-434f-a74e-9f9c017f5b88"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.909353 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "92fa88a3-7fe7-434f-a74e-9f9c017f5b88" (UID: "92fa88a3-7fe7-434f-a74e-9f9c017f5b88"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.911259 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "92fa88a3-7fe7-434f-a74e-9f9c017f5b88" (UID: "92fa88a3-7fe7-434f-a74e-9f9c017f5b88"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.911361 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-kube-api-access-cjxn4" (OuterVolumeSpecName: "kube-api-access-cjxn4") pod "92fa88a3-7fe7-434f-a74e-9f9c017f5b88" (UID: "92fa88a3-7fe7-434f-a74e-9f9c017f5b88"). InnerVolumeSpecName "kube-api-access-cjxn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.911393 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-pod-info" (OuterVolumeSpecName: "pod-info") pod "92fa88a3-7fe7-434f-a74e-9f9c017f5b88" (UID: "92fa88a3-7fe7-434f-a74e-9f9c017f5b88"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.929547 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f" (OuterVolumeSpecName: "persistence") pod "92fa88a3-7fe7-434f-a74e-9f9c017f5b88" (UID: "92fa88a3-7fe7-434f-a74e-9f9c017f5b88"). InnerVolumeSpecName "pvc-87012396-6ac7-490e-8887-418118cdbb8f". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.934263 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-server-conf" (OuterVolumeSpecName: "server-conf") pod "92fa88a3-7fe7-434f-a74e-9f9c017f5b88" (UID: "92fa88a3-7fe7-434f-a74e-9f9c017f5b88"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:53 crc kubenswrapper[4787]: I0126 19:03:53.990186 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "92fa88a3-7fe7-434f-a74e-9f9c017f5b88" (UID: "92fa88a3-7fe7-434f-a74e-9f9c017f5b88"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.008893 4787 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.009248 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.009341 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.009459 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-87012396-6ac7-490e-8887-418118cdbb8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f\") on node \"crc\" " Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.009558 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjxn4\" (UniqueName: \"kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-kube-api-access-cjxn4\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.009638 4787 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.009726 4787 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-pod-info\") on node \"crc\" 
DevicePath \"\"" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.009907 4787 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.010024 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92fa88a3-7fe7-434f-a74e-9f9c017f5b88-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.027981 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.028223 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-87012396-6ac7-490e-8887-418118cdbb8f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f") on node "crc" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.033159 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.086760 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-tjtn6"] Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.087361 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" podUID="a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" containerName="dnsmasq-dns" containerID="cri-o://75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8" gracePeriod=10 Jan 26 19:03:54 crc kubenswrapper[4787]: W0126 19:03:54.108118 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda09fe28b_c9a5_46b1_a327_c9f4eac2036f.slice/crio-cb8ecc04f5309e0b0474de5a198fdcd30d9a1bfc82559846cfb06474e15c44c2 WatchSource:0}: Error finding container cb8ecc04f5309e0b0474de5a198fdcd30d9a1bfc82559846cfb06474e15c44c2: Status 404 returned error can't find the container with id cb8ecc04f5309e0b0474de5a198fdcd30d9a1bfc82559846cfb06474e15c44c2 Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.111106 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-87012396-6ac7-490e-8887-418118cdbb8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.112876 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.169921 4787 generic.go:334] "Generic (PLEG): container finished" podID="92fa88a3-7fe7-434f-a74e-9f9c017f5b88" containerID="258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648" exitCode=0 Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.170018 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92fa88a3-7fe7-434f-a74e-9f9c017f5b88","Type":"ContainerDied","Data":"258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648"} Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.170047 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92fa88a3-7fe7-434f-a74e-9f9c017f5b88","Type":"ContainerDied","Data":"e2d21a5b7a6dd9a8a4fc4251fa9fed758b4e8f61b41a6b2e628da215e4452993"} Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.170062 4787 scope.go:117] "RemoveContainer" containerID="258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.170145 4787 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.183596 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a09fe28b-c9a5-46b1-a327-c9f4eac2036f","Type":"ContainerStarted","Data":"cb8ecc04f5309e0b0474de5a198fdcd30d9a1bfc82559846cfb06474e15c44c2"} Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.214908 4787 scope.go:117] "RemoveContainer" containerID="443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.220108 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.238153 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.251899 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:03:54 crc kubenswrapper[4787]: E0126 19:03:54.252260 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92fa88a3-7fe7-434f-a74e-9f9c017f5b88" containerName="rabbitmq" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.252272 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="92fa88a3-7fe7-434f-a74e-9f9c017f5b88" containerName="rabbitmq" Jan 26 19:03:54 crc kubenswrapper[4787]: E0126 19:03:54.252291 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92fa88a3-7fe7-434f-a74e-9f9c017f5b88" containerName="setup-container" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.252297 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="92fa88a3-7fe7-434f-a74e-9f9c017f5b88" containerName="setup-container" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.252434 4787 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="92fa88a3-7fe7-434f-a74e-9f9c017f5b88" containerName="rabbitmq" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.253215 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.254857 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.257285 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.258469 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fjsvb" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.258817 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.258966 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.265754 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.270472 4787 scope.go:117] "RemoveContainer" containerID="258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648" Jan 26 19:03:54 crc kubenswrapper[4787]: E0126 19:03:54.274584 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648\": container with ID starting with 258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648 not found: ID does not exist" containerID="258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.274619 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648"} err="failed to get container status \"258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648\": rpc error: code = NotFound desc = could not find container \"258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648\": container with ID starting with 258c5098415a7d47be4ef4939764a88b0473ce7b4bf83717f375274814040648 not found: ID does not exist" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.274678 4787 scope.go:117] "RemoveContainer" containerID="443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2" Jan 26 19:03:54 crc kubenswrapper[4787]: E0126 19:03:54.275836 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2\": container with ID starting with 443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2 not found: ID does not exist" containerID="443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.275857 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2"} err="failed to get container status \"443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2\": rpc error: code = NotFound desc = could not find container \"443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2\": container with ID starting with 443dbbae0bcc8303dcfb3168cd35905dc9fc9dcdf778c4677c7778b3e37f04d2 not found: ID does not exist" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.414403 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.414475 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.414549 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.414579 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbktq\" (UniqueName: \"kubernetes.io/projected/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-kube-api-access-pbktq\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.414605 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.414632 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-87012396-6ac7-490e-8887-418118cdbb8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.414649 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.414843 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.414940 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.516479 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.516549 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.516604 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.516627 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbktq\" (UniqueName: \"kubernetes.io/projected/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-kube-api-access-pbktq\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.516648 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.516681 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-87012396-6ac7-490e-8887-418118cdbb8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.516716 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.516755 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.516786 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.517790 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.518441 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.518805 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.519171 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.519198 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-87012396-6ac7-490e-8887-418118cdbb8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e3d241a176833a9d493a38ae90635dd34e4a08e60336a801336e4b66d8414864/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.519567 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.520642 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.522310 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.523025 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.537379 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.537815 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbktq\" (UniqueName: \"kubernetes.io/projected/2f7e7278-7c9b-4123-9866-dd61b2dcb23f-kube-api-access-pbktq\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.555842 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-87012396-6ac7-490e-8887-418118cdbb8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87012396-6ac7-490e-8887-418118cdbb8f\") pod \"rabbitmq-cell1-server-0\" (UID: \"2f7e7278-7c9b-4123-9866-dd61b2dcb23f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.582152 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.719197 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj54g\" (UniqueName: \"kubernetes.io/projected/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-kube-api-access-sj54g\") pod \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.719264 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-config\") pod \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.719285 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-dns-svc\") pod \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\" (UID: \"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f\") " Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.723315 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-kube-api-access-sj54g" (OuterVolumeSpecName: "kube-api-access-sj54g") pod "a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" (UID: "a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f"). InnerVolumeSpecName "kube-api-access-sj54g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.761095 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-config" (OuterVolumeSpecName: "config") pod "a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" (UID: "a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.766357 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" (UID: "a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.821212 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj54g\" (UniqueName: \"kubernetes.io/projected/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-kube-api-access-sj54g\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.821270 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:54 crc kubenswrapper[4787]: I0126 19:03:54.821289 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.017271 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:03:55 crc kubenswrapper[4787]: W0126 19:03:55.080214 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f7e7278_7c9b_4123_9866_dd61b2dcb23f.slice/crio-903a15b4a95467d0135234d753c5aee814e92f2a7f2e0f58d264af787f8cc652 WatchSource:0}: Error finding container 903a15b4a95467d0135234d753c5aee814e92f2a7f2e0f58d264af787f8cc652: Status 404 returned error can't find the container with id 903a15b4a95467d0135234d753c5aee814e92f2a7f2e0f58d264af787f8cc652 Jan 26 19:03:55 crc 
kubenswrapper[4787]: I0126 19:03:55.197389 4787 generic.go:334] "Generic (PLEG): container finished" podID="a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" containerID="75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8" exitCode=0 Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.197624 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" event={"ID":"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f","Type":"ContainerDied","Data":"75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8"} Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.198013 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" event={"ID":"a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f","Type":"ContainerDied","Data":"1d066a17c787ad61ec45d142843d4001ee201d4c9f7b19ebdecd58bc2e4d613e"} Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.198166 4787 scope.go:117] "RemoveContainer" containerID="75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8" Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.197683 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-tjtn6" Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.203005 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2f7e7278-7c9b-4123-9866-dd61b2dcb23f","Type":"ContainerStarted","Data":"903a15b4a95467d0135234d753c5aee814e92f2a7f2e0f58d264af787f8cc652"} Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.231468 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-tjtn6"] Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.237584 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-tjtn6"] Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.238752 4787 scope.go:117] "RemoveContainer" containerID="29b9557233a9858c3277d068a8a207a87fc4ec4f187bf8c629995c4a8f24d679" Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.257394 4787 scope.go:117] "RemoveContainer" containerID="75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8" Jan 26 19:03:55 crc kubenswrapper[4787]: E0126 19:03:55.257825 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8\": container with ID starting with 75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8 not found: ID does not exist" containerID="75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8" Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.257863 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8"} err="failed to get container status \"75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8\": rpc error: code = NotFound desc = could not find container \"75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8\": 
container with ID starting with 75724d7606794940a9b4507d5b6ec9938f12bdd2051e7b60e774e38edcf11bb8 not found: ID does not exist" Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.257889 4787 scope.go:117] "RemoveContainer" containerID="29b9557233a9858c3277d068a8a207a87fc4ec4f187bf8c629995c4a8f24d679" Jan 26 19:03:55 crc kubenswrapper[4787]: E0126 19:03:55.258757 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b9557233a9858c3277d068a8a207a87fc4ec4f187bf8c629995c4a8f24d679\": container with ID starting with 29b9557233a9858c3277d068a8a207a87fc4ec4f187bf8c629995c4a8f24d679 not found: ID does not exist" containerID="29b9557233a9858c3277d068a8a207a87fc4ec4f187bf8c629995c4a8f24d679" Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.258930 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b9557233a9858c3277d068a8a207a87fc4ec4f187bf8c629995c4a8f24d679"} err="failed to get container status \"29b9557233a9858c3277d068a8a207a87fc4ec4f187bf8c629995c4a8f24d679\": rpc error: code = NotFound desc = could not find container \"29b9557233a9858c3277d068a8a207a87fc4ec4f187bf8c629995c4a8f24d679\": container with ID starting with 29b9557233a9858c3277d068a8a207a87fc4ec4f187bf8c629995c4a8f24d679 not found: ID does not exist" Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.605830 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92fa88a3-7fe7-434f-a74e-9f9c017f5b88" path="/var/lib/kubelet/pods/92fa88a3-7fe7-434f-a74e-9f9c017f5b88/volumes" Jan 26 19:03:55 crc kubenswrapper[4787]: I0126 19:03:55.607089 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" path="/var/lib/kubelet/pods/a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f/volumes" Jan 26 19:03:56 crc kubenswrapper[4787]: I0126 19:03:56.221766 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"a09fe28b-c9a5-46b1-a327-c9f4eac2036f","Type":"ContainerStarted","Data":"ec353532a28f0e703e29575f08c40ba33d0f64fc5ea870b2c7bdc8329dc363cb"} Jan 26 19:03:57 crc kubenswrapper[4787]: I0126 19:03:57.233849 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2f7e7278-7c9b-4123-9866-dd61b2dcb23f","Type":"ContainerStarted","Data":"6fc50402c24dfb6a217578ab9f1a8f2ee8078dd5f6aca56241bfaca4a1166335"} Jan 26 19:03:59 crc kubenswrapper[4787]: I0126 19:03:59.590278 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:03:59 crc kubenswrapper[4787]: E0126 19:03:59.591007 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:04:12 crc kubenswrapper[4787]: I0126 19:04:12.589721 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:04:12 crc kubenswrapper[4787]: E0126 19:04:12.609111 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:04:26 crc kubenswrapper[4787]: I0126 19:04:26.589068 4787 scope.go:117] "RemoveContainer" 
containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:04:26 crc kubenswrapper[4787]: E0126 19:04:26.589799 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:04:28 crc kubenswrapper[4787]: I0126 19:04:28.483096 4787 generic.go:334] "Generic (PLEG): container finished" podID="a09fe28b-c9a5-46b1-a327-c9f4eac2036f" containerID="ec353532a28f0e703e29575f08c40ba33d0f64fc5ea870b2c7bdc8329dc363cb" exitCode=0 Jan 26 19:04:28 crc kubenswrapper[4787]: I0126 19:04:28.483211 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a09fe28b-c9a5-46b1-a327-c9f4eac2036f","Type":"ContainerDied","Data":"ec353532a28f0e703e29575f08c40ba33d0f64fc5ea870b2c7bdc8329dc363cb"} Jan 26 19:04:29 crc kubenswrapper[4787]: I0126 19:04:29.492285 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a09fe28b-c9a5-46b1-a327-c9f4eac2036f","Type":"ContainerStarted","Data":"54b3381ba71a422ba824ec4c40c9cfa44e2b8cf1ad8beb75a04227aadfe21d86"} Jan 26 19:04:29 crc kubenswrapper[4787]: I0126 19:04:29.493405 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 19:04:29 crc kubenswrapper[4787]: I0126 19:04:29.493616 4787 generic.go:334] "Generic (PLEG): container finished" podID="2f7e7278-7c9b-4123-9866-dd61b2dcb23f" containerID="6fc50402c24dfb6a217578ab9f1a8f2ee8078dd5f6aca56241bfaca4a1166335" exitCode=0 Jan 26 19:04:29 crc kubenswrapper[4787]: I0126 19:04:29.493660 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2f7e7278-7c9b-4123-9866-dd61b2dcb23f","Type":"ContainerDied","Data":"6fc50402c24dfb6a217578ab9f1a8f2ee8078dd5f6aca56241bfaca4a1166335"} Jan 26 19:04:29 crc kubenswrapper[4787]: I0126 19:04:29.529772 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.529743393 podStartE2EDuration="36.529743393s" podCreationTimestamp="2026-01-26 19:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:29.51822987 +0000 UTC m=+4838.225366023" watchObservedRunningTime="2026-01-26 19:04:29.529743393 +0000 UTC m=+4838.236879526" Jan 26 19:04:30 crc kubenswrapper[4787]: I0126 19:04:30.503561 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2f7e7278-7c9b-4123-9866-dd61b2dcb23f","Type":"ContainerStarted","Data":"4d79a1f3cf06e8d79f4494fd77f5c66934163163e7910673a74fb44619320244"} Jan 26 19:04:30 crc kubenswrapper[4787]: I0126 19:04:30.531791 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.531772377 podStartE2EDuration="36.531772377s" podCreationTimestamp="2026-01-26 19:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:30.527701747 +0000 UTC m=+4839.234837880" watchObservedRunningTime="2026-01-26 19:04:30.531772377 +0000 UTC m=+4839.238908510" Jan 26 19:04:34 crc kubenswrapper[4787]: I0126 19:04:34.583205 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:04:37 crc kubenswrapper[4787]: I0126 19:04:37.592051 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:04:37 crc 
kubenswrapper[4787]: E0126 19:04:37.592825 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:04:43 crc kubenswrapper[4787]: I0126 19:04:43.777927 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 19:04:44 crc kubenswrapper[4787]: I0126 19:04:44.586088 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:04:51 crc kubenswrapper[4787]: I0126 19:04:51.594233 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:04:51 crc kubenswrapper[4787]: I0126 19:04:51.845055 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"e7ef07c1a92edc30d89515dae5c23c492bf520ebdb32affc92bae7157355d378"} Jan 26 19:04:55 crc kubenswrapper[4787]: I0126 19:04:55.967884 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 26 19:04:55 crc kubenswrapper[4787]: E0126 19:04:55.968781 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" containerName="init" Jan 26 19:04:55 crc kubenswrapper[4787]: I0126 19:04:55.968795 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" containerName="init" Jan 26 19:04:55 crc kubenswrapper[4787]: E0126 19:04:55.968807 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" containerName="dnsmasq-dns" Jan 26 19:04:55 crc kubenswrapper[4787]: I0126 19:04:55.968813 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" containerName="dnsmasq-dns" Jan 26 19:04:55 crc kubenswrapper[4787]: I0126 19:04:55.968962 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fcea1c-e71b-45b4-8cd7-2b9c1d898e9f" containerName="dnsmasq-dns" Jan 26 19:04:55 crc kubenswrapper[4787]: I0126 19:04:55.969478 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:04:55 crc kubenswrapper[4787]: I0126 19:04:55.971920 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9wp7t" Jan 26 19:04:55 crc kubenswrapper[4787]: I0126 19:04:55.980375 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:04:56 crc kubenswrapper[4787]: I0126 19:04:56.051398 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-298wm\" (UniqueName: \"kubernetes.io/projected/43273fd0-35ae-4d1a-bc42-8ac4ae7f1577-kube-api-access-298wm\") pod \"mariadb-client\" (UID: \"43273fd0-35ae-4d1a-bc42-8ac4ae7f1577\") " pod="openstack/mariadb-client" Jan 26 19:04:56 crc kubenswrapper[4787]: I0126 19:04:56.153501 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-298wm\" (UniqueName: \"kubernetes.io/projected/43273fd0-35ae-4d1a-bc42-8ac4ae7f1577-kube-api-access-298wm\") pod \"mariadb-client\" (UID: \"43273fd0-35ae-4d1a-bc42-8ac4ae7f1577\") " pod="openstack/mariadb-client" Jan 26 19:04:56 crc kubenswrapper[4787]: I0126 19:04:56.177195 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-298wm\" (UniqueName: \"kubernetes.io/projected/43273fd0-35ae-4d1a-bc42-8ac4ae7f1577-kube-api-access-298wm\") pod 
\"mariadb-client\" (UID: \"43273fd0-35ae-4d1a-bc42-8ac4ae7f1577\") " pod="openstack/mariadb-client" Jan 26 19:04:56 crc kubenswrapper[4787]: I0126 19:04:56.306364 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:04:56 crc kubenswrapper[4787]: I0126 19:04:56.806912 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:04:56 crc kubenswrapper[4787]: I0126 19:04:56.812921 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:04:56 crc kubenswrapper[4787]: I0126 19:04:56.879004 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"43273fd0-35ae-4d1a-bc42-8ac4ae7f1577","Type":"ContainerStarted","Data":"96952f35d542e28e5b64406b951f03a7ef629a3a0db7eebaf9d79e3b67650637"} Jan 26 19:04:57 crc kubenswrapper[4787]: I0126 19:04:57.886882 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"43273fd0-35ae-4d1a-bc42-8ac4ae7f1577","Type":"ContainerStarted","Data":"3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b"} Jan 26 19:04:57 crc kubenswrapper[4787]: I0126 19:04:57.905929 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.16107479 podStartE2EDuration="2.90590702s" podCreationTimestamp="2026-01-26 19:04:55 +0000 UTC" firstStartedPulling="2026-01-26 19:04:56.812713305 +0000 UTC m=+4865.519849438" lastFinishedPulling="2026-01-26 19:04:57.557545535 +0000 UTC m=+4866.264681668" observedRunningTime="2026-01-26 19:04:57.901121381 +0000 UTC m=+4866.608257534" watchObservedRunningTime="2026-01-26 19:04:57.90590702 +0000 UTC m=+4866.613043143" Jan 26 19:05:10 crc kubenswrapper[4787]: I0126 19:05:10.421459 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:05:10 crc kubenswrapper[4787]: I0126 
19:05:10.422308 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="43273fd0-35ae-4d1a-bc42-8ac4ae7f1577" containerName="mariadb-client" containerID="cri-o://3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b" gracePeriod=30 Jan 26 19:05:10 crc kubenswrapper[4787]: I0126 19:05:10.910942 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:05:10 crc kubenswrapper[4787]: I0126 19:05:10.989343 4787 generic.go:334] "Generic (PLEG): container finished" podID="43273fd0-35ae-4d1a-bc42-8ac4ae7f1577" containerID="3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b" exitCode=143 Jan 26 19:05:10 crc kubenswrapper[4787]: I0126 19:05:10.989390 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"43273fd0-35ae-4d1a-bc42-8ac4ae7f1577","Type":"ContainerDied","Data":"3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b"} Jan 26 19:05:10 crc kubenswrapper[4787]: I0126 19:05:10.989426 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"43273fd0-35ae-4d1a-bc42-8ac4ae7f1577","Type":"ContainerDied","Data":"96952f35d542e28e5b64406b951f03a7ef629a3a0db7eebaf9d79e3b67650637"} Jan 26 19:05:10 crc kubenswrapper[4787]: I0126 19:05:10.989449 4787 scope.go:117] "RemoveContainer" containerID="3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b" Jan 26 19:05:10 crc kubenswrapper[4787]: I0126 19:05:10.989503 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:05:11 crc kubenswrapper[4787]: I0126 19:05:11.010562 4787 scope.go:117] "RemoveContainer" containerID="3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b" Jan 26 19:05:11 crc kubenswrapper[4787]: E0126 19:05:11.010989 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b\": container with ID starting with 3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b not found: ID does not exist" containerID="3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b" Jan 26 19:05:11 crc kubenswrapper[4787]: I0126 19:05:11.011029 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b"} err="failed to get container status \"3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b\": rpc error: code = NotFound desc = could not find container \"3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b\": container with ID starting with 3686150da3359b99e72bba2e08267cfe4dede31409eb8e6055121ef49939e70b not found: ID does not exist" Jan 26 19:05:11 crc kubenswrapper[4787]: I0126 19:05:11.039776 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-298wm\" (UniqueName: \"kubernetes.io/projected/43273fd0-35ae-4d1a-bc42-8ac4ae7f1577-kube-api-access-298wm\") pod \"43273fd0-35ae-4d1a-bc42-8ac4ae7f1577\" (UID: \"43273fd0-35ae-4d1a-bc42-8ac4ae7f1577\") " Jan 26 19:05:11 crc kubenswrapper[4787]: I0126 19:05:11.044885 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43273fd0-35ae-4d1a-bc42-8ac4ae7f1577-kube-api-access-298wm" (OuterVolumeSpecName: "kube-api-access-298wm") pod "43273fd0-35ae-4d1a-bc42-8ac4ae7f1577" (UID: 
"43273fd0-35ae-4d1a-bc42-8ac4ae7f1577"). InnerVolumeSpecName "kube-api-access-298wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:05:11 crc kubenswrapper[4787]: I0126 19:05:11.141780 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-298wm\" (UniqueName: \"kubernetes.io/projected/43273fd0-35ae-4d1a-bc42-8ac4ae7f1577-kube-api-access-298wm\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:11 crc kubenswrapper[4787]: I0126 19:05:11.333238 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:05:11 crc kubenswrapper[4787]: I0126 19:05:11.341122 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:05:11 crc kubenswrapper[4787]: I0126 19:05:11.596558 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43273fd0-35ae-4d1a-bc42-8ac4ae7f1577" path="/var/lib/kubelet/pods/43273fd0-35ae-4d1a-bc42-8ac4ae7f1577/volumes" Jan 26 19:05:58 crc kubenswrapper[4787]: I0126 19:05:58.139820 4787 scope.go:117] "RemoveContainer" containerID="369734c6cac9d784cbf95fb8d954e7cee7c4521d3e58e16256b597ddd703b8cf" Jan 26 19:07:16 crc kubenswrapper[4787]: I0126 19:07:16.808508 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:07:16 crc kubenswrapper[4787]: I0126 19:07:16.809312 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.635108 4787 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-7k2hb"] Jan 26 19:07:46 crc kubenswrapper[4787]: E0126 19:07:46.636464 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43273fd0-35ae-4d1a-bc42-8ac4ae7f1577" containerName="mariadb-client" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.636496 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="43273fd0-35ae-4d1a-bc42-8ac4ae7f1577" containerName="mariadb-client" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.636807 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="43273fd0-35ae-4d1a-bc42-8ac4ae7f1577" containerName="mariadb-client" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.639185 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.650878 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7k2hb"] Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.712546 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7pdd\" (UniqueName: \"kubernetes.io/projected/ea9c9695-0d69-4297-a493-941161a95352-kube-api-access-d7pdd\") pod \"redhat-operators-7k2hb\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.712692 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-utilities\") pod \"redhat-operators-7k2hb\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.712721 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-catalog-content\") pod \"redhat-operators-7k2hb\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.808250 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.808305 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.814166 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-utilities\") pod \"redhat-operators-7k2hb\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.814246 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-catalog-content\") pod \"redhat-operators-7k2hb\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.814343 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7pdd\" (UniqueName: 
\"kubernetes.io/projected/ea9c9695-0d69-4297-a493-941161a95352-kube-api-access-d7pdd\") pod \"redhat-operators-7k2hb\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.815303 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-utilities\") pod \"redhat-operators-7k2hb\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.815569 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-catalog-content\") pod \"redhat-operators-7k2hb\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.837412 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7pdd\" (UniqueName: \"kubernetes.io/projected/ea9c9695-0d69-4297-a493-941161a95352-kube-api-access-d7pdd\") pod \"redhat-operators-7k2hb\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:46 crc kubenswrapper[4787]: I0126 19:07:46.977351 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:47 crc kubenswrapper[4787]: I0126 19:07:47.411807 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7k2hb"] Jan 26 19:07:48 crc kubenswrapper[4787]: I0126 19:07:48.218686 4787 generic.go:334] "Generic (PLEG): container finished" podID="ea9c9695-0d69-4297-a493-941161a95352" containerID="0ab6cddab6280607c06422b2515b0b944109837aaa285c077414a2e332fc696f" exitCode=0 Jan 26 19:07:48 crc kubenswrapper[4787]: I0126 19:07:48.218734 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k2hb" event={"ID":"ea9c9695-0d69-4297-a493-941161a95352","Type":"ContainerDied","Data":"0ab6cddab6280607c06422b2515b0b944109837aaa285c077414a2e332fc696f"} Jan 26 19:07:48 crc kubenswrapper[4787]: I0126 19:07:48.219176 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k2hb" event={"ID":"ea9c9695-0d69-4297-a493-941161a95352","Type":"ContainerStarted","Data":"fe2760f98048dcf8dbafbbcaf42b5b3d4380aeddc2738753307bb33aece0aa9e"} Jan 26 19:07:49 crc kubenswrapper[4787]: I0126 19:07:49.232888 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k2hb" event={"ID":"ea9c9695-0d69-4297-a493-941161a95352","Type":"ContainerStarted","Data":"2859e82bfc24e54ca2a9cbfff5d2b81bad2565be5d719dea78bada266406dc95"} Jan 26 19:07:50 crc kubenswrapper[4787]: I0126 19:07:50.242929 4787 generic.go:334] "Generic (PLEG): container finished" podID="ea9c9695-0d69-4297-a493-941161a95352" containerID="2859e82bfc24e54ca2a9cbfff5d2b81bad2565be5d719dea78bada266406dc95" exitCode=0 Jan 26 19:07:50 crc kubenswrapper[4787]: I0126 19:07:50.242993 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k2hb" 
event={"ID":"ea9c9695-0d69-4297-a493-941161a95352","Type":"ContainerDied","Data":"2859e82bfc24e54ca2a9cbfff5d2b81bad2565be5d719dea78bada266406dc95"} Jan 26 19:07:51 crc kubenswrapper[4787]: I0126 19:07:51.251472 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k2hb" event={"ID":"ea9c9695-0d69-4297-a493-941161a95352","Type":"ContainerStarted","Data":"6f7d8f721717e8d9fcbc95c219e34428b2445dcf0af6e306ed7bfc08e0b6a507"} Jan 26 19:07:51 crc kubenswrapper[4787]: I0126 19:07:51.274629 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7k2hb" podStartSLOduration=2.8648997510000003 podStartE2EDuration="5.274612461s" podCreationTimestamp="2026-01-26 19:07:46 +0000 UTC" firstStartedPulling="2026-01-26 19:07:48.220392467 +0000 UTC m=+5036.927528620" lastFinishedPulling="2026-01-26 19:07:50.630105197 +0000 UTC m=+5039.337241330" observedRunningTime="2026-01-26 19:07:51.270865769 +0000 UTC m=+5039.978001902" watchObservedRunningTime="2026-01-26 19:07:51.274612461 +0000 UTC m=+5039.981748594" Jan 26 19:07:56 crc kubenswrapper[4787]: I0126 19:07:56.977712 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:56 crc kubenswrapper[4787]: I0126 19:07:56.978508 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:57 crc kubenswrapper[4787]: I0126 19:07:57.025030 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:57 crc kubenswrapper[4787]: I0126 19:07:57.358705 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:07:57 crc kubenswrapper[4787]: I0126 19:07:57.407702 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-7k2hb"] Jan 26 19:07:59 crc kubenswrapper[4787]: I0126 19:07:59.328682 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7k2hb" podUID="ea9c9695-0d69-4297-a493-941161a95352" containerName="registry-server" containerID="cri-o://6f7d8f721717e8d9fcbc95c219e34428b2445dcf0af6e306ed7bfc08e0b6a507" gracePeriod=2 Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.356258 4787 generic.go:334] "Generic (PLEG): container finished" podID="ea9c9695-0d69-4297-a493-941161a95352" containerID="6f7d8f721717e8d9fcbc95c219e34428b2445dcf0af6e306ed7bfc08e0b6a507" exitCode=0 Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.356331 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k2hb" event={"ID":"ea9c9695-0d69-4297-a493-941161a95352","Type":"ContainerDied","Data":"6f7d8f721717e8d9fcbc95c219e34428b2445dcf0af6e306ed7bfc08e0b6a507"} Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.517501 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.573604 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-utilities\") pod \"ea9c9695-0d69-4297-a493-941161a95352\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.573702 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7pdd\" (UniqueName: \"kubernetes.io/projected/ea9c9695-0d69-4297-a493-941161a95352-kube-api-access-d7pdd\") pod \"ea9c9695-0d69-4297-a493-941161a95352\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.573829 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-catalog-content\") pod \"ea9c9695-0d69-4297-a493-941161a95352\" (UID: \"ea9c9695-0d69-4297-a493-941161a95352\") " Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.574759 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-utilities" (OuterVolumeSpecName: "utilities") pod "ea9c9695-0d69-4297-a493-941161a95352" (UID: "ea9c9695-0d69-4297-a493-941161a95352"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.579712 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9c9695-0d69-4297-a493-941161a95352-kube-api-access-d7pdd" (OuterVolumeSpecName: "kube-api-access-d7pdd") pod "ea9c9695-0d69-4297-a493-941161a95352" (UID: "ea9c9695-0d69-4297-a493-941161a95352"). InnerVolumeSpecName "kube-api-access-d7pdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.681303 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.681346 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7pdd\" (UniqueName: \"kubernetes.io/projected/ea9c9695-0d69-4297-a493-941161a95352-kube-api-access-d7pdd\") on node \"crc\" DevicePath \"\"" Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.699042 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea9c9695-0d69-4297-a493-941161a95352" (UID: "ea9c9695-0d69-4297-a493-941161a95352"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:08:02 crc kubenswrapper[4787]: I0126 19:08:02.783110 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c9695-0d69-4297-a493-941161a95352-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:08:03 crc kubenswrapper[4787]: I0126 19:08:03.369251 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k2hb" event={"ID":"ea9c9695-0d69-4297-a493-941161a95352","Type":"ContainerDied","Data":"fe2760f98048dcf8dbafbbcaf42b5b3d4380aeddc2738753307bb33aece0aa9e"} Jan 26 19:08:03 crc kubenswrapper[4787]: I0126 19:08:03.369335 4787 scope.go:117] "RemoveContainer" containerID="6f7d8f721717e8d9fcbc95c219e34428b2445dcf0af6e306ed7bfc08e0b6a507" Jan 26 19:08:03 crc kubenswrapper[4787]: I0126 19:08:03.369393 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7k2hb" Jan 26 19:08:03 crc kubenswrapper[4787]: I0126 19:08:03.394439 4787 scope.go:117] "RemoveContainer" containerID="2859e82bfc24e54ca2a9cbfff5d2b81bad2565be5d719dea78bada266406dc95" Jan 26 19:08:03 crc kubenswrapper[4787]: I0126 19:08:03.416206 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7k2hb"] Jan 26 19:08:03 crc kubenswrapper[4787]: I0126 19:08:03.425337 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7k2hb"] Jan 26 19:08:03 crc kubenswrapper[4787]: I0126 19:08:03.436313 4787 scope.go:117] "RemoveContainer" containerID="0ab6cddab6280607c06422b2515b0b944109837aaa285c077414a2e332fc696f" Jan 26 19:08:03 crc kubenswrapper[4787]: I0126 19:08:03.598201 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea9c9695-0d69-4297-a493-941161a95352" path="/var/lib/kubelet/pods/ea9c9695-0d69-4297-a493-941161a95352/volumes" Jan 26 19:08:16 crc kubenswrapper[4787]: I0126 19:08:16.807815 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:08:16 crc kubenswrapper[4787]: I0126 19:08:16.808347 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:08:16 crc kubenswrapper[4787]: I0126 19:08:16.808404 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 19:08:16 crc 
kubenswrapper[4787]: I0126 19:08:16.809121 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7ef07c1a92edc30d89515dae5c23c492bf520ebdb32affc92bae7157355d378"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:08:16 crc kubenswrapper[4787]: I0126 19:08:16.809188 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://e7ef07c1a92edc30d89515dae5c23c492bf520ebdb32affc92bae7157355d378" gracePeriod=600 Jan 26 19:08:17 crc kubenswrapper[4787]: I0126 19:08:17.476255 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="e7ef07c1a92edc30d89515dae5c23c492bf520ebdb32affc92bae7157355d378" exitCode=0 Jan 26 19:08:17 crc kubenswrapper[4787]: I0126 19:08:17.476335 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"e7ef07c1a92edc30d89515dae5c23c492bf520ebdb32affc92bae7157355d378"} Jan 26 19:08:17 crc kubenswrapper[4787]: I0126 19:08:17.476886 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14"} Jan 26 19:08:17 crc kubenswrapper[4787]: I0126 19:08:17.476914 4787 scope.go:117] "RemoveContainer" containerID="f436860bd226b1d671cd193d332f05458bbe93b70200035949f12ceff9563691" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.241782 4787 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 26 19:09:02 crc kubenswrapper[4787]: E0126 19:09:02.242589 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c9695-0d69-4297-a493-941161a95352" containerName="extract-content" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.242607 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9c9695-0d69-4297-a493-941161a95352" containerName="extract-content" Jan 26 19:09:02 crc kubenswrapper[4787]: E0126 19:09:02.242653 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c9695-0d69-4297-a493-941161a95352" containerName="registry-server" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.242664 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9c9695-0d69-4297-a493-941161a95352" containerName="registry-server" Jan 26 19:09:02 crc kubenswrapper[4787]: E0126 19:09:02.242686 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c9695-0d69-4297-a493-941161a95352" containerName="extract-utilities" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.242694 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9c9695-0d69-4297-a493-941161a95352" containerName="extract-utilities" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.242831 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea9c9695-0d69-4297-a493-941161a95352" containerName="registry-server" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.243394 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.246764 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9wp7t" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.260416 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.432369 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fb489be6-d81b-4825-998f-703f00435dc0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb489be6-d81b-4825-998f-703f00435dc0\") pod \"mariadb-copy-data\" (UID: \"1f03be67-34a7-411e-ae84-cbad607741f2\") " pod="openstack/mariadb-copy-data" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.432442 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9pwv\" (UniqueName: \"kubernetes.io/projected/1f03be67-34a7-411e-ae84-cbad607741f2-kube-api-access-n9pwv\") pod \"mariadb-copy-data\" (UID: \"1f03be67-34a7-411e-ae84-cbad607741f2\") " pod="openstack/mariadb-copy-data" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.534082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fb489be6-d81b-4825-998f-703f00435dc0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb489be6-d81b-4825-998f-703f00435dc0\") pod \"mariadb-copy-data\" (UID: \"1f03be67-34a7-411e-ae84-cbad607741f2\") " pod="openstack/mariadb-copy-data" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.534140 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9pwv\" (UniqueName: \"kubernetes.io/projected/1f03be67-34a7-411e-ae84-cbad607741f2-kube-api-access-n9pwv\") pod \"mariadb-copy-data\" (UID: \"1f03be67-34a7-411e-ae84-cbad607741f2\") " pod="openstack/mariadb-copy-data" 
Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.538589 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.538688 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fb489be6-d81b-4825-998f-703f00435dc0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb489be6-d81b-4825-998f-703f00435dc0\") pod \"mariadb-copy-data\" (UID: \"1f03be67-34a7-411e-ae84-cbad607741f2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/349b96abd71869323abe7b1051d87d4a07c60964665eae160d8a22d7e4633a08/globalmount\"" pod="openstack/mariadb-copy-data" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.559587 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9pwv\" (UniqueName: \"kubernetes.io/projected/1f03be67-34a7-411e-ae84-cbad607741f2-kube-api-access-n9pwv\") pod \"mariadb-copy-data\" (UID: \"1f03be67-34a7-411e-ae84-cbad607741f2\") " pod="openstack/mariadb-copy-data" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.573897 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fb489be6-d81b-4825-998f-703f00435dc0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb489be6-d81b-4825-998f-703f00435dc0\") pod \"mariadb-copy-data\" (UID: \"1f03be67-34a7-411e-ae84-cbad607741f2\") " pod="openstack/mariadb-copy-data" Jan 26 19:09:02 crc kubenswrapper[4787]: I0126 19:09:02.870365 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 26 19:09:03 crc kubenswrapper[4787]: I0126 19:09:03.429204 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 26 19:09:03 crc kubenswrapper[4787]: I0126 19:09:03.860343 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1f03be67-34a7-411e-ae84-cbad607741f2","Type":"ContainerStarted","Data":"b65fb13bc25f2784797256942b8c740673e6157c36ce4f622f404e9395a54d5a"} Jan 26 19:09:03 crc kubenswrapper[4787]: I0126 19:09:03.860855 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1f03be67-34a7-411e-ae84-cbad607741f2","Type":"ContainerStarted","Data":"a3f80a9ab8f2599b2442fb0227e8ae606b6f6e85e6e5131122029fce07dc5d62"} Jan 26 19:09:03 crc kubenswrapper[4787]: I0126 19:09:03.886386 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.886358511 podStartE2EDuration="2.886358511s" podCreationTimestamp="2026-01-26 19:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:09:03.879542354 +0000 UTC m=+5112.586678497" watchObservedRunningTime="2026-01-26 19:09:03.886358511 +0000 UTC m=+5112.593494684" Jan 26 19:09:06 crc kubenswrapper[4787]: I0126 19:09:06.838435 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 26 19:09:06 crc kubenswrapper[4787]: I0126 19:09:06.840926 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:09:06 crc kubenswrapper[4787]: I0126 19:09:06.863381 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:09:07 crc kubenswrapper[4787]: I0126 19:09:07.010028 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nxpq\" (UniqueName: \"kubernetes.io/projected/fa46f11c-318a-4ac4-af30-68dee2522fc0-kube-api-access-6nxpq\") pod \"mariadb-client\" (UID: \"fa46f11c-318a-4ac4-af30-68dee2522fc0\") " pod="openstack/mariadb-client" Jan 26 19:09:07 crc kubenswrapper[4787]: I0126 19:09:07.112307 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nxpq\" (UniqueName: \"kubernetes.io/projected/fa46f11c-318a-4ac4-af30-68dee2522fc0-kube-api-access-6nxpq\") pod \"mariadb-client\" (UID: \"fa46f11c-318a-4ac4-af30-68dee2522fc0\") " pod="openstack/mariadb-client" Jan 26 19:09:07 crc kubenswrapper[4787]: I0126 19:09:07.137258 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nxpq\" (UniqueName: \"kubernetes.io/projected/fa46f11c-318a-4ac4-af30-68dee2522fc0-kube-api-access-6nxpq\") pod \"mariadb-client\" (UID: \"fa46f11c-318a-4ac4-af30-68dee2522fc0\") " pod="openstack/mariadb-client" Jan 26 19:09:07 crc kubenswrapper[4787]: I0126 19:09:07.175171 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:09:08 crc kubenswrapper[4787]: I0126 19:09:08.292847 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:09:08 crc kubenswrapper[4787]: I0126 19:09:08.907465 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa46f11c-318a-4ac4-af30-68dee2522fc0" containerID="9c42500fe69b59b61ac4f021cadac3e6f3a7b4458dc85571dd95b2e932070b5f" exitCode=0 Jan 26 19:09:08 crc kubenswrapper[4787]: I0126 19:09:08.907588 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fa46f11c-318a-4ac4-af30-68dee2522fc0","Type":"ContainerDied","Data":"9c42500fe69b59b61ac4f021cadac3e6f3a7b4458dc85571dd95b2e932070b5f"} Jan 26 19:09:08 crc kubenswrapper[4787]: I0126 19:09:08.909009 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fa46f11c-318a-4ac4-af30-68dee2522fc0","Type":"ContainerStarted","Data":"fcc5f9336168765f44839d20daff9a6b6a76f2d6c411fb788825c21b08dddbbe"} Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.461656 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qkcnt"] Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.463812 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.503611 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qkcnt"] Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.551440 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcl5x\" (UniqueName: \"kubernetes.io/projected/bb25ae95-0952-470e-9c22-7a70aa61715f-kube-api-access-zcl5x\") pod \"certified-operators-qkcnt\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.551528 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-catalog-content\") pod \"certified-operators-qkcnt\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.551860 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-utilities\") pod \"certified-operators-qkcnt\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.653576 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcl5x\" (UniqueName: \"kubernetes.io/projected/bb25ae95-0952-470e-9c22-7a70aa61715f-kube-api-access-zcl5x\") pod \"certified-operators-qkcnt\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.653929 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-catalog-content\") pod \"certified-operators-qkcnt\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.654021 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-utilities\") pod \"certified-operators-qkcnt\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.654462 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-utilities\") pod \"certified-operators-qkcnt\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.654533 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-catalog-content\") pod \"certified-operators-qkcnt\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.674734 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcl5x\" (UniqueName: \"kubernetes.io/projected/bb25ae95-0952-470e-9c22-7a70aa61715f-kube-api-access-zcl5x\") pod \"certified-operators-qkcnt\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:09 crc kubenswrapper[4787]: I0126 19:09:09.829204 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.238166 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.271703 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_fa46f11c-318a-4ac4-af30-68dee2522fc0/mariadb-client/0.log" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.316748 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.324031 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.334597 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qkcnt"] Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.365034 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nxpq\" (UniqueName: \"kubernetes.io/projected/fa46f11c-318a-4ac4-af30-68dee2522fc0-kube-api-access-6nxpq\") pod \"fa46f11c-318a-4ac4-af30-68dee2522fc0\" (UID: \"fa46f11c-318a-4ac4-af30-68dee2522fc0\") " Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.370260 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa46f11c-318a-4ac4-af30-68dee2522fc0-kube-api-access-6nxpq" (OuterVolumeSpecName: "kube-api-access-6nxpq") pod "fa46f11c-318a-4ac4-af30-68dee2522fc0" (UID: "fa46f11c-318a-4ac4-af30-68dee2522fc0"). InnerVolumeSpecName "kube-api-access-6nxpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.429077 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 26 19:09:10 crc kubenswrapper[4787]: E0126 19:09:10.429554 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa46f11c-318a-4ac4-af30-68dee2522fc0" containerName="mariadb-client" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.429579 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa46f11c-318a-4ac4-af30-68dee2522fc0" containerName="mariadb-client" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.429774 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa46f11c-318a-4ac4-af30-68dee2522fc0" containerName="mariadb-client" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.430480 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.435047 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.466824 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nxpq\" (UniqueName: \"kubernetes.io/projected/fa46f11c-318a-4ac4-af30-68dee2522fc0-kube-api-access-6nxpq\") on node \"crc\" DevicePath \"\"" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.568464 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw7lj\" (UniqueName: \"kubernetes.io/projected/5f91dd9c-6d70-4105-a1a6-b3d443d67a89-kube-api-access-dw7lj\") pod \"mariadb-client\" (UID: \"5f91dd9c-6d70-4105-a1a6-b3d443d67a89\") " pod="openstack/mariadb-client" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.671539 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw7lj\" (UniqueName: 
\"kubernetes.io/projected/5f91dd9c-6d70-4105-a1a6-b3d443d67a89-kube-api-access-dw7lj\") pod \"mariadb-client\" (UID: \"5f91dd9c-6d70-4105-a1a6-b3d443d67a89\") " pod="openstack/mariadb-client" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.694101 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw7lj\" (UniqueName: \"kubernetes.io/projected/5f91dd9c-6d70-4105-a1a6-b3d443d67a89-kube-api-access-dw7lj\") pod \"mariadb-client\" (UID: \"5f91dd9c-6d70-4105-a1a6-b3d443d67a89\") " pod="openstack/mariadb-client" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.756002 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.927801 4787 generic.go:334] "Generic (PLEG): container finished" podID="bb25ae95-0952-470e-9c22-7a70aa61715f" containerID="012651d623d85ed3baf08613c3d0b03ba87b299c66f0dbe879cc22968ded4511" exitCode=0 Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.927900 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkcnt" event={"ID":"bb25ae95-0952-470e-9c22-7a70aa61715f","Type":"ContainerDied","Data":"012651d623d85ed3baf08613c3d0b03ba87b299c66f0dbe879cc22968ded4511"} Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.927931 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkcnt" event={"ID":"bb25ae95-0952-470e-9c22-7a70aa61715f","Type":"ContainerStarted","Data":"ecc4526b405e4f2ffe3afeebb59d9e7f52e878ff27f2ba3204c45ac5063e6291"} Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.930370 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc5f9336168765f44839d20daff9a6b6a76f2d6c411fb788825c21b08dddbbe" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.930439 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:09:10 crc kubenswrapper[4787]: I0126 19:09:10.955461 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="fa46f11c-318a-4ac4-af30-68dee2522fc0" podUID="5f91dd9c-6d70-4105-a1a6-b3d443d67a89" Jan 26 19:09:11 crc kubenswrapper[4787]: I0126 19:09:11.177222 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:09:11 crc kubenswrapper[4787]: I0126 19:09:11.597819 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa46f11c-318a-4ac4-af30-68dee2522fc0" path="/var/lib/kubelet/pods/fa46f11c-318a-4ac4-af30-68dee2522fc0/volumes" Jan 26 19:09:11 crc kubenswrapper[4787]: I0126 19:09:11.943395 4787 generic.go:334] "Generic (PLEG): container finished" podID="5f91dd9c-6d70-4105-a1a6-b3d443d67a89" containerID="5e4558b00a84faefbe8617afa45d3e015cf2dacac0d534549cef5bafc0d97e15" exitCode=0 Jan 26 19:09:11 crc kubenswrapper[4787]: I0126 19:09:11.943452 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"5f91dd9c-6d70-4105-a1a6-b3d443d67a89","Type":"ContainerDied","Data":"5e4558b00a84faefbe8617afa45d3e015cf2dacac0d534549cef5bafc0d97e15"} Jan 26 19:09:11 crc kubenswrapper[4787]: I0126 19:09:11.943842 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"5f91dd9c-6d70-4105-a1a6-b3d443d67a89","Type":"ContainerStarted","Data":"1e4e08806e9178727c1ea3d82c615b3967472094e05e84770c7e7187f86928c5"} Jan 26 19:09:11 crc kubenswrapper[4787]: I0126 19:09:11.946668 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkcnt" event={"ID":"bb25ae95-0952-470e-9c22-7a70aa61715f","Type":"ContainerStarted","Data":"d96ff4056725b94b9fe6245b0e14ffe015d8a60ad2491930a459dc5509e17234"} Jan 26 19:09:12 crc kubenswrapper[4787]: I0126 19:09:12.959247 4787 generic.go:334] 
"Generic (PLEG): container finished" podID="bb25ae95-0952-470e-9c22-7a70aa61715f" containerID="d96ff4056725b94b9fe6245b0e14ffe015d8a60ad2491930a459dc5509e17234" exitCode=0 Jan 26 19:09:12 crc kubenswrapper[4787]: I0126 19:09:12.959325 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkcnt" event={"ID":"bb25ae95-0952-470e-9c22-7a70aa61715f","Type":"ContainerDied","Data":"d96ff4056725b94b9fe6245b0e14ffe015d8a60ad2491930a459dc5509e17234"} Jan 26 19:09:13 crc kubenswrapper[4787]: I0126 19:09:13.313212 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:09:13 crc kubenswrapper[4787]: I0126 19:09:13.335499 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_5f91dd9c-6d70-4105-a1a6-b3d443d67a89/mariadb-client/0.log" Jan 26 19:09:13 crc kubenswrapper[4787]: I0126 19:09:13.363026 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:09:13 crc kubenswrapper[4787]: I0126 19:09:13.368830 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 26 19:09:13 crc kubenswrapper[4787]: I0126 19:09:13.413621 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw7lj\" (UniqueName: \"kubernetes.io/projected/5f91dd9c-6d70-4105-a1a6-b3d443d67a89-kube-api-access-dw7lj\") pod \"5f91dd9c-6d70-4105-a1a6-b3d443d67a89\" (UID: \"5f91dd9c-6d70-4105-a1a6-b3d443d67a89\") " Jan 26 19:09:13 crc kubenswrapper[4787]: I0126 19:09:13.419438 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f91dd9c-6d70-4105-a1a6-b3d443d67a89-kube-api-access-dw7lj" (OuterVolumeSpecName: "kube-api-access-dw7lj") pod "5f91dd9c-6d70-4105-a1a6-b3d443d67a89" (UID: "5f91dd9c-6d70-4105-a1a6-b3d443d67a89"). InnerVolumeSpecName "kube-api-access-dw7lj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:09:13 crc kubenswrapper[4787]: I0126 19:09:13.515704 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw7lj\" (UniqueName: \"kubernetes.io/projected/5f91dd9c-6d70-4105-a1a6-b3d443d67a89-kube-api-access-dw7lj\") on node \"crc\" DevicePath \"\"" Jan 26 19:09:13 crc kubenswrapper[4787]: I0126 19:09:13.600575 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f91dd9c-6d70-4105-a1a6-b3d443d67a89" path="/var/lib/kubelet/pods/5f91dd9c-6d70-4105-a1a6-b3d443d67a89/volumes" Jan 26 19:09:13 crc kubenswrapper[4787]: I0126 19:09:13.970432 4787 scope.go:117] "RemoveContainer" containerID="5e4558b00a84faefbe8617afa45d3e015cf2dacac0d534549cef5bafc0d97e15" Jan 26 19:09:13 crc kubenswrapper[4787]: I0126 19:09:13.970570 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 26 19:09:13 crc kubenswrapper[4787]: I0126 19:09:13.974746 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkcnt" event={"ID":"bb25ae95-0952-470e-9c22-7a70aa61715f","Type":"ContainerStarted","Data":"403d5a9ef59d5e2a731933d2864222f47b3f507cc32d096240be329e997e26e3"} Jan 26 19:09:14 crc kubenswrapper[4787]: I0126 19:09:14.006538 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qkcnt" podStartSLOduration=2.5693782179999998 podStartE2EDuration="5.00651414s" podCreationTimestamp="2026-01-26 19:09:09 +0000 UTC" firstStartedPulling="2026-01-26 19:09:10.929668221 +0000 UTC m=+5119.636804354" lastFinishedPulling="2026-01-26 19:09:13.366804143 +0000 UTC m=+5122.073940276" observedRunningTime="2026-01-26 19:09:13.998825681 +0000 UTC m=+5122.705961844" watchObservedRunningTime="2026-01-26 19:09:14.00651414 +0000 UTC m=+5122.713650273" Jan 26 19:09:19 crc kubenswrapper[4787]: I0126 19:09:19.830363 4787 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:19 crc kubenswrapper[4787]: I0126 19:09:19.830931 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:19 crc kubenswrapper[4787]: I0126 19:09:19.888764 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:20 crc kubenswrapper[4787]: I0126 19:09:20.099467 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:23 crc kubenswrapper[4787]: I0126 19:09:23.912052 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qkcnt"] Jan 26 19:09:23 crc kubenswrapper[4787]: I0126 19:09:23.913062 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qkcnt" podUID="bb25ae95-0952-470e-9c22-7a70aa61715f" containerName="registry-server" containerID="cri-o://403d5a9ef59d5e2a731933d2864222f47b3f507cc32d096240be329e997e26e3" gracePeriod=2 Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.071275 4787 generic.go:334] "Generic (PLEG): container finished" podID="bb25ae95-0952-470e-9c22-7a70aa61715f" containerID="403d5a9ef59d5e2a731933d2864222f47b3f507cc32d096240be329e997e26e3" exitCode=0 Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.071366 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkcnt" event={"ID":"bb25ae95-0952-470e-9c22-7a70aa61715f","Type":"ContainerDied","Data":"403d5a9ef59d5e2a731933d2864222f47b3f507cc32d096240be329e997e26e3"} Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.366523 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.493387 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-catalog-content\") pod \"bb25ae95-0952-470e-9c22-7a70aa61715f\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.493533 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-utilities\") pod \"bb25ae95-0952-470e-9c22-7a70aa61715f\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.493579 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcl5x\" (UniqueName: \"kubernetes.io/projected/bb25ae95-0952-470e-9c22-7a70aa61715f-kube-api-access-zcl5x\") pod \"bb25ae95-0952-470e-9c22-7a70aa61715f\" (UID: \"bb25ae95-0952-470e-9c22-7a70aa61715f\") " Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.494641 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-utilities" (OuterVolumeSpecName: "utilities") pod "bb25ae95-0952-470e-9c22-7a70aa61715f" (UID: "bb25ae95-0952-470e-9c22-7a70aa61715f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.500361 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb25ae95-0952-470e-9c22-7a70aa61715f-kube-api-access-zcl5x" (OuterVolumeSpecName: "kube-api-access-zcl5x") pod "bb25ae95-0952-470e-9c22-7a70aa61715f" (UID: "bb25ae95-0952-470e-9c22-7a70aa61715f"). InnerVolumeSpecName "kube-api-access-zcl5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.567315 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb25ae95-0952-470e-9c22-7a70aa61715f" (UID: "bb25ae95-0952-470e-9c22-7a70aa61715f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.595433 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.595472 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb25ae95-0952-470e-9c22-7a70aa61715f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:09:24 crc kubenswrapper[4787]: I0126 19:09:24.595481 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcl5x\" (UniqueName: \"kubernetes.io/projected/bb25ae95-0952-470e-9c22-7a70aa61715f-kube-api-access-zcl5x\") on node \"crc\" DevicePath \"\"" Jan 26 19:09:25 crc kubenswrapper[4787]: I0126 19:09:25.081669 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkcnt" event={"ID":"bb25ae95-0952-470e-9c22-7a70aa61715f","Type":"ContainerDied","Data":"ecc4526b405e4f2ffe3afeebb59d9e7f52e878ff27f2ba3204c45ac5063e6291"} Jan 26 19:09:25 crc kubenswrapper[4787]: I0126 19:09:25.081854 4787 scope.go:117] "RemoveContainer" containerID="403d5a9ef59d5e2a731933d2864222f47b3f507cc32d096240be329e997e26e3" Jan 26 19:09:25 crc kubenswrapper[4787]: I0126 19:09:25.081874 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qkcnt" Jan 26 19:09:25 crc kubenswrapper[4787]: I0126 19:09:25.105871 4787 scope.go:117] "RemoveContainer" containerID="d96ff4056725b94b9fe6245b0e14ffe015d8a60ad2491930a459dc5509e17234" Jan 26 19:09:25 crc kubenswrapper[4787]: I0126 19:09:25.116225 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qkcnt"] Jan 26 19:09:25 crc kubenswrapper[4787]: I0126 19:09:25.124780 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qkcnt"] Jan 26 19:09:25 crc kubenswrapper[4787]: I0126 19:09:25.149270 4787 scope.go:117] "RemoveContainer" containerID="012651d623d85ed3baf08613c3d0b03ba87b299c66f0dbe879cc22968ded4511" Jan 26 19:09:25 crc kubenswrapper[4787]: I0126 19:09:25.603674 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb25ae95-0952-470e-9c22-7a70aa61715f" path="/var/lib/kubelet/pods/bb25ae95-0952-470e-9c22-7a70aa61715f/volumes" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.523201 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 19:09:54 crc kubenswrapper[4787]: E0126 19:09:54.524565 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f91dd9c-6d70-4105-a1a6-b3d443d67a89" containerName="mariadb-client" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.524583 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f91dd9c-6d70-4105-a1a6-b3d443d67a89" containerName="mariadb-client" Jan 26 19:09:54 crc kubenswrapper[4787]: E0126 19:09:54.524604 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb25ae95-0952-470e-9c22-7a70aa61715f" containerName="extract-content" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.524612 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb25ae95-0952-470e-9c22-7a70aa61715f" containerName="extract-content" Jan 26 19:09:54 crc 
kubenswrapper[4787]: E0126 19:09:54.524625 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb25ae95-0952-470e-9c22-7a70aa61715f" containerName="extract-utilities" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.524633 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb25ae95-0952-470e-9c22-7a70aa61715f" containerName="extract-utilities" Jan 26 19:09:54 crc kubenswrapper[4787]: E0126 19:09:54.524655 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb25ae95-0952-470e-9c22-7a70aa61715f" containerName="registry-server" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.524663 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb25ae95-0952-470e-9c22-7a70aa61715f" containerName="registry-server" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.524823 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb25ae95-0952-470e-9c22-7a70aa61715f" containerName="registry-server" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.524844 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f91dd9c-6d70-4105-a1a6-b3d443d67a89" containerName="mariadb-client" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.525888 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.527610 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.527632 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.527746 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nvhv6" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.548231 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.566305 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.567596 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.571442 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.576417 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.607792 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.616999 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644390 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-config\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644474 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fdcdea-726a-4606-adec-82e9dbf50e97-config\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644533 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr5jl\" (UniqueName: \"kubernetes.io/projected/d6fdcdea-726a-4606-adec-82e9dbf50e97-kube-api-access-lr5jl\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644573 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d6fdcdea-726a-4606-adec-82e9dbf50e97-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644611 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644649 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6fdcdea-726a-4606-adec-82e9dbf50e97-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644707 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ab33df63-0c1f-4915-982b-780e022b97f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab33df63-0c1f-4915-982b-780e022b97f3\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644760 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fa6d828a-b742-449b-a6b4-6969662721e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa6d828a-b742-449b-a6b4-6969662721e1\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644798 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644848 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlw8l\" (UniqueName: \"kubernetes.io/projected/07252574-f41c-451b-a7c2-1dd0c52dc509-kube-api-access-jlw8l\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644894 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6fdcdea-726a-4606-adec-82e9dbf50e97-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644923 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07252574-f41c-451b-a7c2-1dd0c52dc509-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.644999 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/07252574-f41c-451b-a7c2-1dd0c52dc509-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.645043 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-09fb36d0-5e1f-4911-94f5-95487aada7cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09fb36d0-5e1f-4911-94f5-95487aada7cd\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.645080 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.645113 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07252574-f41c-451b-a7c2-1dd0c52dc509-config\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.645189 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcvj4\" (UniqueName: \"kubernetes.io/projected/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-kube-api-access-xcvj4\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.645228 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07252574-f41c-451b-a7c2-1dd0c52dc509-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.749449 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fa6d828a-b742-449b-a6b4-6969662721e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa6d828a-b742-449b-a6b4-6969662721e1\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.750083 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.750520 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlw8l\" (UniqueName: \"kubernetes.io/projected/07252574-f41c-451b-a7c2-1dd0c52dc509-kube-api-access-jlw8l\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.750852 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6fdcdea-726a-4606-adec-82e9dbf50e97-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.751137 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.751241 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07252574-f41c-451b-a7c2-1dd0c52dc509-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.751863 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/07252574-f41c-451b-a7c2-1dd0c52dc509-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " 
pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.752376 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6fdcdea-726a-4606-adec-82e9dbf50e97-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.752499 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/07252574-f41c-451b-a7c2-1dd0c52dc509-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.755682 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-09fb36d0-5e1f-4911-94f5-95487aada7cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09fb36d0-5e1f-4911-94f5-95487aada7cd\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.756250 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.756543 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07252574-f41c-451b-a7c2-1dd0c52dc509-config\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.756885 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xcvj4\" (UniqueName: \"kubernetes.io/projected/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-kube-api-access-xcvj4\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.757241 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07252574-f41c-451b-a7c2-1dd0c52dc509-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.757743 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-config\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.758206 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fdcdea-726a-4606-adec-82e9dbf50e97-config\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.770744 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr5jl\" (UniqueName: \"kubernetes.io/projected/d6fdcdea-726a-4606-adec-82e9dbf50e97-kube-api-access-lr5jl\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.759773 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07252574-f41c-451b-a7c2-1dd0c52dc509-config\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " 
pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.760562 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07252574-f41c-451b-a7c2-1dd0c52dc509-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.761900 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-config\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.761932 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.762464 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fdcdea-726a-4606-adec-82e9dbf50e97-config\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.771138 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.771999 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-09fb36d0-5e1f-4911-94f5-95487aada7cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09fb36d0-5e1f-4911-94f5-95487aada7cd\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a80b366c1e34624b17f9ba3c750e7bac49f340f7532c1d39870457fa5723795e/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.771166 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d6fdcdea-726a-4606-adec-82e9dbf50e97-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.772083 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.772107 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6fdcdea-726a-4606-adec-82e9dbf50e97-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.772156 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ab33df63-0c1f-4915-982b-780e022b97f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab33df63-0c1f-4915-982b-780e022b97f3\") pod 
\"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.771658 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.772433 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fa6d828a-b742-449b-a6b4-6969662721e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa6d828a-b742-449b-a6b4-6969662721e1\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d0b85d90c1fa637cfe9d3d7109185d31e0b94e321bfe5ce50da17eb6935547ef/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.771417 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d6fdcdea-726a-4606-adec-82e9dbf50e97-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.758655 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07252574-f41c-451b-a7c2-1dd0c52dc509-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.775532 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: 
I0126 19:09:54.776992 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.781675 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6fdcdea-726a-4606-adec-82e9dbf50e97-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.786362 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.787508 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.788808 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ab33df63-0c1f-4915-982b-780e022b97f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab33df63-0c1f-4915-982b-780e022b97f3\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ec2faf4ab94776809c5a4b6c5b738a757d03b088a5122a24d07d682eb67d6f0/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.787552 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.795460 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.795647 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.796451 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.796697 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-ggrqb" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.801991 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlw8l\" (UniqueName: \"kubernetes.io/projected/07252574-f41c-451b-a7c2-1dd0c52dc509-kube-api-access-jlw8l\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.802080 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.807154 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.809083 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.818405 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcvj4\" (UniqueName: \"kubernetes.io/projected/48ab69d9-de2a-4b50-9b0a-cb9c3f8975df-kube-api-access-xcvj4\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.818659 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr5jl\" (UniqueName: \"kubernetes.io/projected/d6fdcdea-726a-4606-adec-82e9dbf50e97-kube-api-access-lr5jl\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.821411 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.829038 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.847735 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-09fb36d0-5e1f-4911-94f5-95487aada7cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09fb36d0-5e1f-4911-94f5-95487aada7cd\") pod \"ovsdbserver-nb-2\" (UID: \"07252574-f41c-451b-a7c2-1dd0c52dc509\") " pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.873994 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fa6d828a-b742-449b-a6b4-6969662721e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa6d828a-b742-449b-a6b4-6969662721e1\") pod \"ovsdbserver-nb-1\" (UID: \"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df\") " pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875171 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8757dfc6-75dd-4355-9533-c78a28f42aff-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875231 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8757dfc6-75dd-4355-9533-c78a28f42aff-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875256 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9316b7-843a-46b9-ade9-1fb19a748269-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875329 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2936574c-79b1-4311-92b6-3f9c430851fe-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875388 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8757dfc6-75dd-4355-9533-c78a28f42aff-config\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875415 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c559a29b-66ad-416c-b7ed-79a6f96be49c\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c559a29b-66ad-416c-b7ed-79a6f96be49c\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875483 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2936574c-79b1-4311-92b6-3f9c430851fe-config\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875506 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-66e1d8f3-e10e-4dca-abf6-fc0acf5edd7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66e1d8f3-e10e-4dca-abf6-fc0acf5edd7f\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875555 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxjcj\" (UniqueName: \"kubernetes.io/projected/0d9316b7-843a-46b9-ade9-1fb19a748269-kube-api-access-rxjcj\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875573 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8757dfc6-75dd-4355-9533-c78a28f42aff-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875627 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/2936574c-79b1-4311-92b6-3f9c430851fe-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875700 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d9316b7-843a-46b9-ade9-1fb19a748269-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875770 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-407b4118-076e-4c6e-b261-69ee7615e19b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-407b4118-076e-4c6e-b261-69ee7615e19b\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.875789 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfjcb\" (UniqueName: \"kubernetes.io/projected/8757dfc6-75dd-4355-9533-c78a28f42aff-kube-api-access-vfjcb\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.876049 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8z7r\" (UniqueName: \"kubernetes.io/projected/2936574c-79b1-4311-92b6-3f9c430851fe-kube-api-access-d8z7r\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.876073 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/0d9316b7-843a-46b9-ade9-1fb19a748269-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.876123 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2936574c-79b1-4311-92b6-3f9c430851fe-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.876140 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9316b7-843a-46b9-ade9-1fb19a748269-config\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.881138 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ab33df63-0c1f-4915-982b-780e022b97f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab33df63-0c1f-4915-982b-780e022b97f3\") pod \"ovsdbserver-nb-0\" (UID: \"d6fdcdea-726a-4606-adec-82e9dbf50e97\") " pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.893836 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.902014 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985117 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8757dfc6-75dd-4355-9533-c78a28f42aff-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985438 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8757dfc6-75dd-4355-9533-c78a28f42aff-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985485 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9316b7-843a-46b9-ade9-1fb19a748269-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985518 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2936574c-79b1-4311-92b6-3f9c430851fe-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985548 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8757dfc6-75dd-4355-9533-c78a28f42aff-config\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985578 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-c559a29b-66ad-416c-b7ed-79a6f96be49c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c559a29b-66ad-416c-b7ed-79a6f96be49c\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985618 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2936574c-79b1-4311-92b6-3f9c430851fe-config\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985646 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-66e1d8f3-e10e-4dca-abf6-fc0acf5edd7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66e1d8f3-e10e-4dca-abf6-fc0acf5edd7f\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985678 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxjcj\" (UniqueName: \"kubernetes.io/projected/0d9316b7-843a-46b9-ade9-1fb19a748269-kube-api-access-rxjcj\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985706 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8757dfc6-75dd-4355-9533-c78a28f42aff-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985781 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/2936574c-79b1-4311-92b6-3f9c430851fe-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985852 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d9316b7-843a-46b9-ade9-1fb19a748269-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985888 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-407b4118-076e-4c6e-b261-69ee7615e19b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-407b4118-076e-4c6e-b261-69ee7615e19b\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.985913 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfjcb\" (UniqueName: \"kubernetes.io/projected/8757dfc6-75dd-4355-9533-c78a28f42aff-kube-api-access-vfjcb\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.986028 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8z7r\" (UniqueName: \"kubernetes.io/projected/2936574c-79b1-4311-92b6-3f9c430851fe-kube-api-access-d8z7r\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.986053 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d9316b7-843a-46b9-ade9-1fb19a748269-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: 
\"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.986082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2936574c-79b1-4311-92b6-3f9c430851fe-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.986116 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9316b7-843a-46b9-ade9-1fb19a748269-config\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.987256 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9316b7-843a-46b9-ade9-1fb19a748269-config\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.987666 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8757dfc6-75dd-4355-9533-c78a28f42aff-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.988671 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8757dfc6-75dd-4355-9533-c78a28f42aff-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.989897 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/2936574c-79b1-4311-92b6-3f9c430851fe-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.990870 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d9316b7-843a-46b9-ade9-1fb19a748269-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.992378 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d9316b7-843a-46b9-ade9-1fb19a748269-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.993062 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8757dfc6-75dd-4355-9533-c78a28f42aff-config\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.993764 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2936574c-79b1-4311-92b6-3f9c430851fe-config\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.994068 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2936574c-79b1-4311-92b6-3f9c430851fe-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.994373 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9316b7-843a-46b9-ade9-1fb19a748269-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.995099 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8757dfc6-75dd-4355-9533-c78a28f42aff-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.995737 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.995762 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c559a29b-66ad-416c-b7ed-79a6f96be49c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c559a29b-66ad-416c-b7ed-79a6f96be49c\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ff5d3980ab7cfcabe5da83dcd41ef6f78fdc7e19ce7b499d4746e53a64bc9cff/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.996780 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.996800 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-407b4118-076e-4c6e-b261-69ee7615e19b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-407b4118-076e-4c6e-b261-69ee7615e19b\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b0f676cab9ccc4de20704487e9e4778923a97483d19876968833ecaf876d2a5a/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.997637 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 19:09:54 crc kubenswrapper[4787]: I0126 19:09:54.997668 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-66e1d8f3-e10e-4dca-abf6-fc0acf5edd7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66e1d8f3-e10e-4dca-abf6-fc0acf5edd7f\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1023f49646832d44e7f40b8ba5be5f30b7af8c16897e8e7c1561d03220097e9c/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.000413 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2936574c-79b1-4311-92b6-3f9c430851fe-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.009267 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfjcb\" (UniqueName: \"kubernetes.io/projected/8757dfc6-75dd-4355-9533-c78a28f42aff-kube-api-access-vfjcb\") pod 
\"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.036916 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxjcj\" (UniqueName: \"kubernetes.io/projected/0d9316b7-843a-46b9-ade9-1fb19a748269-kube-api-access-rxjcj\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.038574 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8z7r\" (UniqueName: \"kubernetes.io/projected/2936574c-79b1-4311-92b6-3f9c430851fe-kube-api-access-d8z7r\") pod \"ovsdbserver-sb-0\" (UID: \"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.051496 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-407b4118-076e-4c6e-b261-69ee7615e19b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-407b4118-076e-4c6e-b261-69ee7615e19b\") pod \"ovsdbserver-sb-2\" (UID: \"8757dfc6-75dd-4355-9533-c78a28f42aff\") " pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.060940 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c559a29b-66ad-416c-b7ed-79a6f96be49c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c559a29b-66ad-416c-b7ed-79a6f96be49c\") pod \"ovsdbserver-sb-1\" (UID: \"0d9316b7-843a-46b9-ade9-1fb19a748269\") " pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.064202 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-66e1d8f3-e10e-4dca-abf6-fc0acf5edd7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66e1d8f3-e10e-4dca-abf6-fc0acf5edd7f\") pod \"ovsdbserver-sb-0\" (UID: 
\"2936574c-79b1-4311-92b6-3f9c430851fe\") " pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.147593 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.207804 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.246702 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.257255 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.456561 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.543452 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 26 19:09:55 crc kubenswrapper[4787]: W0126 19:09:55.571822 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ab69d9_de2a_4b50_9b0a_cb9c3f8975df.slice/crio-a07ca7b67f1e774bfde0be22bf0602aded6b41374c6c29ab05269fe50d5dd8a1 WatchSource:0}: Error finding container a07ca7b67f1e774bfde0be22bf0602aded6b41374c6c29ab05269fe50d5dd8a1: Status 404 returned error can't find the container with id a07ca7b67f1e774bfde0be22bf0602aded6b41374c6c29ab05269fe50d5dd8a1 Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.656387 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.833986 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 19:09:55 crc kubenswrapper[4787]: I0126 19:09:55.922247 4787 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 26 19:09:55 crc kubenswrapper[4787]: W0126 19:09:55.923532 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8757dfc6_75dd_4355_9533_c78a28f42aff.slice/crio-db53be878a2dec72aba8a16547fecdb2f5fbf2692e2d864b4fb5aa97746b4b2d WatchSource:0}: Error finding container db53be878a2dec72aba8a16547fecdb2f5fbf2692e2d864b4fb5aa97746b4b2d: Status 404 returned error can't find the container with id db53be878a2dec72aba8a16547fecdb2f5fbf2692e2d864b4fb5aa97746b4b2d
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.365292 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d6fdcdea-726a-4606-adec-82e9dbf50e97","Type":"ContainerStarted","Data":"92bd749541cae37dd04f82c84041fdebdb86bbb11a83bcb1d15a1a48b9be1dcb"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.365578 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d6fdcdea-726a-4606-adec-82e9dbf50e97","Type":"ContainerStarted","Data":"d19bf904e14fa9cd3220cf96a61d9abe050daa569a748ccb6e85a93b7274c5cf"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.365662 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d6fdcdea-726a-4606-adec-82e9dbf50e97","Type":"ContainerStarted","Data":"085b46ee6feb36252272a552d92407cb33c5ce4af1cc6f8e10763ebf95430ef1"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.367460 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8757dfc6-75dd-4355-9533-c78a28f42aff","Type":"ContainerStarted","Data":"15034a6df4eea6e3146b03caab8b78498e20eef862ab6aa4244ad501807e1fe6"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.367489 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8757dfc6-75dd-4355-9533-c78a28f42aff","Type":"ContainerStarted","Data":"2cbe06a42cf81ab55571c76ae864fb29fcdb8ecdd5ad5d31dbda75987eeabdcb"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.367502 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8757dfc6-75dd-4355-9533-c78a28f42aff","Type":"ContainerStarted","Data":"db53be878a2dec72aba8a16547fecdb2f5fbf2692e2d864b4fb5aa97746b4b2d"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.368969 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2936574c-79b1-4311-92b6-3f9c430851fe","Type":"ContainerStarted","Data":"5765f80d64efcae796e83c5f1fc23f2ea56fbc8c7a2ae7bdbd99939b8e136b48"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.369014 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2936574c-79b1-4311-92b6-3f9c430851fe","Type":"ContainerStarted","Data":"1afed767872889b24c83859def22a35b67da2273000154b9bab8987f502ad4f1"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.369027 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2936574c-79b1-4311-92b6-3f9c430851fe","Type":"ContainerStarted","Data":"ff3b010d61265fb421ffae33f54d170fcb2d8cd03c3dcaa08043edf130879cd3"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.370576 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df","Type":"ContainerStarted","Data":"69d426eb039d515cd0b2833fe89bee696643a837e19e3d95b96b208f58e51c8e"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.370612 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df","Type":"ContainerStarted","Data":"d913440ee977054edcd0a76e2c29aa7261ab5ea65e46fb97b3ce86fd872a23d6"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.370640 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48ab69d9-de2a-4b50-9b0a-cb9c3f8975df","Type":"ContainerStarted","Data":"a07ca7b67f1e774bfde0be22bf0602aded6b41374c6c29ab05269fe50d5dd8a1"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.371897 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"07252574-f41c-451b-a7c2-1dd0c52dc509","Type":"ContainerStarted","Data":"c2387e00ae4e899b931116c22a674131dce5434e8336060fc083cdb018739b0b"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.371964 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"07252574-f41c-451b-a7c2-1dd0c52dc509","Type":"ContainerStarted","Data":"a8ddd24fc1b9bf654a9ec34face5eb3c96f124157ebc0e4ce3d44302c3a7b87c"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.371978 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"07252574-f41c-451b-a7c2-1dd0c52dc509","Type":"ContainerStarted","Data":"a696838eb5b6de97f88048412cb4708ff3ca24c858a00d7de8fe1c26a42ed54c"}
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.385762 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.385743707 podStartE2EDuration="3.385743707s" podCreationTimestamp="2026-01-26 19:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:09:56.382936008 +0000 UTC m=+5165.090072161" watchObservedRunningTime="2026-01-26 19:09:56.385743707 +0000 UTC m=+5165.092879850"
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.408257 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.40823927 podStartE2EDuration="3.40823927s" podCreationTimestamp="2026-01-26 19:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:09:56.403249226 +0000 UTC m=+5165.110385359" watchObservedRunningTime="2026-01-26 19:09:56.40823927 +0000 UTC m=+5165.115375403"
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.423393 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.423373782 podStartE2EDuration="3.423373782s" podCreationTimestamp="2026-01-26 19:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:09:56.422114911 +0000 UTC m=+5165.129251074" watchObservedRunningTime="2026-01-26 19:09:56.423373782 +0000 UTC m=+5165.130509915"
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.442193 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.442177584 podStartE2EDuration="3.442177584s" podCreationTimestamp="2026-01-26 19:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:09:56.439811136 +0000 UTC m=+5165.146947279" watchObservedRunningTime="2026-01-26 19:09:56.442177584 +0000 UTC m=+5165.149313717"
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.462299 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.462282849 podStartE2EDuration="3.462282849s" podCreationTimestamp="2026-01-26 19:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:09:56.458235899 +0000 UTC m=+5165.165372022" watchObservedRunningTime="2026-01-26 19:09:56.462282849 +0000 UTC m=+5165.169418982"
Jan 26 19:09:56 crc kubenswrapper[4787]: I0126 19:09:56.602718 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 26 19:09:56 crc kubenswrapper[4787]: W0126 19:09:56.606469 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d9316b7_843a_46b9_ade9_1fb19a748269.slice/crio-b8d89ca4b2da5776899519abcf76123c826c188ba24c2698bc1a8ff358a657d8 WatchSource:0}: Error finding container b8d89ca4b2da5776899519abcf76123c826c188ba24c2698bc1a8ff358a657d8: Status 404 returned error can't find the container with id b8d89ca4b2da5776899519abcf76123c826c188ba24c2698bc1a8ff358a657d8
Jan 26 19:09:57 crc kubenswrapper[4787]: I0126 19:09:57.389255 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"0d9316b7-843a-46b9-ade9-1fb19a748269","Type":"ContainerStarted","Data":"3c8d1128b2dcdf37bbb6bf7048311cdb8e928cc6d0eaf4bb9ee7104b8079cfd8"}
Jan 26 19:09:57 crc kubenswrapper[4787]: I0126 19:09:57.389670 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"0d9316b7-843a-46b9-ade9-1fb19a748269","Type":"ContainerStarted","Data":"9c0f2f62a3853ebf83594a5728d7649bb9532a890a726c4b6c54a55804a411dd"}
Jan 26 19:09:57 crc kubenswrapper[4787]: I0126 19:09:57.389694 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"0d9316b7-843a-46b9-ade9-1fb19a748269","Type":"ContainerStarted","Data":"b8d89ca4b2da5776899519abcf76123c826c188ba24c2698bc1a8ff358a657d8"}
Jan 26 19:09:57 crc kubenswrapper[4787]: I0126 19:09:57.419176 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.419159342 podStartE2EDuration="4.419159342s" podCreationTimestamp="2026-01-26 19:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:09:57.418558777 +0000 UTC m=+5166.125694950" watchObservedRunningTime="2026-01-26 19:09:57.419159342 +0000 UTC m=+5166.126295475"
Jan 26 19:09:57 crc kubenswrapper[4787]: I0126 19:09:57.894393 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1"
Jan 26 19:09:57 crc kubenswrapper[4787]: I0126 19:09:57.902618 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2"
Jan 26 19:09:58 crc kubenswrapper[4787]: I0126 19:09:58.149116 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 26 19:09:58 crc kubenswrapper[4787]: I0126 19:09:58.208782 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 26 19:09:58 crc kubenswrapper[4787]: I0126 19:09:58.247143 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1"
Jan 26 19:09:58 crc kubenswrapper[4787]: I0126 19:09:58.257532 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2"
Jan 26 19:09:58 crc kubenswrapper[4787]: I0126 19:09:58.281910 4787 scope.go:117] "RemoveContainer" containerID="f8a53801702323235138bdba325eed37fb51247a43bc771e70d9e48be5c89e1d"
Jan 26 19:09:59 crc kubenswrapper[4787]: I0126 19:09:59.894194 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1"
Jan 26 19:09:59 crc kubenswrapper[4787]: I0126 19:09:59.902421 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2"
Jan 26 19:10:00 crc kubenswrapper[4787]: I0126 19:10:00.149041 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 26 19:10:00 crc kubenswrapper[4787]: I0126 19:10:00.209016 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 26 19:10:00 crc kubenswrapper[4787]: I0126 19:10:00.247112 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1"
Jan 26 19:10:00 crc kubenswrapper[4787]: I0126 19:10:00.257559 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2"
Jan 26 19:10:00 crc kubenswrapper[4787]: I0126 19:10:00.941740 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1"
Jan 26 19:10:00 crc kubenswrapper[4787]: I0126 19:10:00.949746 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2"
Jan 26 19:10:00 crc kubenswrapper[4787]: I0126 19:10:00.993432 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.003287 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.206523 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.281623 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b55dccf7-lckdp"]
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.283996 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.284075 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.284816 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.287280 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.300551 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b55dccf7-lckdp"]
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.335798 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.341544 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.349329 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.391859 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.411709 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrdhq\" (UniqueName: \"kubernetes.io/projected/ef56df07-e13c-490a-ae33-3ccc21494e13-kube-api-access-mrdhq\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.411765 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-config\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.411793 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-ovsdbserver-nb\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.411890 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-dns-svc\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.468566 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.513322 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrdhq\" (UniqueName: \"kubernetes.io/projected/ef56df07-e13c-490a-ae33-3ccc21494e13-kube-api-access-mrdhq\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.513383 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-config\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.513438 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-ovsdbserver-nb\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.513606 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-dns-svc\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.514508 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-config\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.514530 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-dns-svc\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.514981 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-ovsdbserver-nb\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.547075 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrdhq\" (UniqueName: \"kubernetes.io/projected/ef56df07-e13c-490a-ae33-3ccc21494e13-kube-api-access-mrdhq\") pod \"dnsmasq-dns-9b55dccf7-lckdp\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") " pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.620765 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.767586 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b55dccf7-lckdp"]
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.809774 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7684477b9f-kgcfc"]
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.812035 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.817924 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.834179 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7684477b9f-kgcfc"]
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.919628 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-dns-svc\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.919692 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-sb\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.919728 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjq99\" (UniqueName: \"kubernetes.io/projected/59c34360-12fe-4ad8-a7fc-f66984bfb115-kube-api-access-cjq99\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.919774 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-config\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:01 crc kubenswrapper[4787]: I0126 19:10:01.919793 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-nb\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.021471 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-dns-svc\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.021522 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-sb\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.021544 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjq99\" (UniqueName: \"kubernetes.io/projected/59c34360-12fe-4ad8-a7fc-f66984bfb115-kube-api-access-cjq99\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.021581 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-config\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.021595 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-nb\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.022539 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-nb\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.022544 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-config\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.022596 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-dns-svc\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.023055 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-sb\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.044273 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjq99\" (UniqueName: \"kubernetes.io/projected/59c34360-12fe-4ad8-a7fc-f66984bfb115-kube-api-access-cjq99\") pod \"dnsmasq-dns-7684477b9f-kgcfc\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.197588 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.238718 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b55dccf7-lckdp"]
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.435767 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b55dccf7-lckdp" event={"ID":"ef56df07-e13c-490a-ae33-3ccc21494e13","Type":"ContainerStarted","Data":"948ad9d4934ec451e99b5901763398e3d338009af20775c9fe3303996de5e6e4"}
Jan 26 19:10:02 crc kubenswrapper[4787]: I0126 19:10:02.662598 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7684477b9f-kgcfc"]
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.451117 4787 generic.go:334] "Generic (PLEG): container finished" podID="59c34360-12fe-4ad8-a7fc-f66984bfb115" containerID="645377a5b1968428fb0faff92ec5e61fc1091b1518b76a3d30be27c3cc3c39b8" exitCode=0
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.451202 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc" event={"ID":"59c34360-12fe-4ad8-a7fc-f66984bfb115","Type":"ContainerDied","Data":"645377a5b1968428fb0faff92ec5e61fc1091b1518b76a3d30be27c3cc3c39b8"}
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.451376 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc" event={"ID":"59c34360-12fe-4ad8-a7fc-f66984bfb115","Type":"ContainerStarted","Data":"0475017b8010b27396fe14ee4e1978a99ed4e1a9073151db832371e9a0d6f0a3"}
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.453641 4787 generic.go:334] "Generic (PLEG): container finished" podID="ef56df07-e13c-490a-ae33-3ccc21494e13" containerID="fc1ba0ecd4cf9087fa0c9c677d7922606c0d2b538dc4ad7cb526fb363ef9aa8f" exitCode=0
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.453683 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b55dccf7-lckdp" event={"ID":"ef56df07-e13c-490a-ae33-3ccc21494e13","Type":"ContainerDied","Data":"fc1ba0ecd4cf9087fa0c9c677d7922606c0d2b538dc4ad7cb526fb363ef9aa8f"}
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.785212 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b55dccf7-lckdp"
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.852095 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrdhq\" (UniqueName: \"kubernetes.io/projected/ef56df07-e13c-490a-ae33-3ccc21494e13-kube-api-access-mrdhq\") pod \"ef56df07-e13c-490a-ae33-3ccc21494e13\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") "
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.852254 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-dns-svc\") pod \"ef56df07-e13c-490a-ae33-3ccc21494e13\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") "
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.852300 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-config\") pod \"ef56df07-e13c-490a-ae33-3ccc21494e13\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") "
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.852383 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-ovsdbserver-nb\") pod \"ef56df07-e13c-490a-ae33-3ccc21494e13\" (UID: \"ef56df07-e13c-490a-ae33-3ccc21494e13\") "
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.856344 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef56df07-e13c-490a-ae33-3ccc21494e13-kube-api-access-mrdhq" (OuterVolumeSpecName: "kube-api-access-mrdhq") pod "ef56df07-e13c-490a-ae33-3ccc21494e13" (UID: "ef56df07-e13c-490a-ae33-3ccc21494e13"). InnerVolumeSpecName "kube-api-access-mrdhq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.871082 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-config" (OuterVolumeSpecName: "config") pod "ef56df07-e13c-490a-ae33-3ccc21494e13" (UID: "ef56df07-e13c-490a-ae33-3ccc21494e13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.874523 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef56df07-e13c-490a-ae33-3ccc21494e13" (UID: "ef56df07-e13c-490a-ae33-3ccc21494e13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.876146 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef56df07-e13c-490a-ae33-3ccc21494e13" (UID: "ef56df07-e13c-490a-ae33-3ccc21494e13"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.917433 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"]
Jan 26 19:10:03 crc kubenswrapper[4787]: E0126 19:10:03.918071 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef56df07-e13c-490a-ae33-3ccc21494e13" containerName="init"
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.918089 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef56df07-e13c-490a-ae33-3ccc21494e13" containerName="init"
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.918278 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef56df07-e13c-490a-ae33-3ccc21494e13" containerName="init"
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.918969 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.922087 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert"
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.932447 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.954026 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " pod="openstack/ovn-copy-data"
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.954109 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97l64\" (UniqueName: \"kubernetes.io/projected/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-kube-api-access-97l64\") pod \"ovn-copy-data\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " pod="openstack/ovn-copy-data"
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.954303 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41145ebd-8c73-445c-b980-b07ef122220f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41145ebd-8c73-445c-b980-b07ef122220f\") pod \"ovn-copy-data\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " pod="openstack/ovn-copy-data"
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.954569 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrdhq\" (UniqueName: \"kubernetes.io/projected/ef56df07-e13c-490a-ae33-3ccc21494e13-kube-api-access-mrdhq\") on node \"crc\" DevicePath \"\""
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.954625 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.954641 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-config\") on node \"crc\" DevicePath \"\""
Jan 26 19:10:03 crc kubenswrapper[4787]: I0126 19:10:03.954653 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef56df07-e13c-490a-ae33-3ccc21494e13-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.055464 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97l64\" (UniqueName: \"kubernetes.io/projected/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-kube-api-access-97l64\") pod \"ovn-copy-data\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " pod="openstack/ovn-copy-data"
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.055528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41145ebd-8c73-445c-b980-b07ef122220f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41145ebd-8c73-445c-b980-b07ef122220f\") pod \"ovn-copy-data\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " pod="openstack/ovn-copy-data"
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.055814 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " pod="openstack/ovn-copy-data"
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.058683 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.058727 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41145ebd-8c73-445c-b980-b07ef122220f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41145ebd-8c73-445c-b980-b07ef122220f\") pod \"ovn-copy-data\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/923df7c1b9cef3ce2a02ec070b8b0593c7b43064a12459383f01df28dcc48c8b/globalmount\"" pod="openstack/ovn-copy-data"
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.059699 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " pod="openstack/ovn-copy-data"
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.072228 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97l64\" (UniqueName: \"kubernetes.io/projected/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-kube-api-access-97l64\") pod \"ovn-copy-data\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " pod="openstack/ovn-copy-data"
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.084836 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41145ebd-8c73-445c-b980-b07ef122220f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41145ebd-8c73-445c-b980-b07ef122220f\") pod \"ovn-copy-data\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " pod="openstack/ovn-copy-data"
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.252586 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.468884 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc" event={"ID":"59c34360-12fe-4ad8-a7fc-f66984bfb115","Type":"ContainerStarted","Data":"6fb28ef69a3d5fee9a8ee6faa8ca7754967bb78a4504ae284ef733e8cd69b082"}
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.470116 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc"
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.471511 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b55dccf7-lckdp" event={"ID":"ef56df07-e13c-490a-ae33-3ccc21494e13","Type":"ContainerDied","Data":"948ad9d4934ec451e99b5901763398e3d338009af20775c9fe3303996de5e6e4"}
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.471544 4787 scope.go:117] "RemoveContainer" containerID="fc1ba0ecd4cf9087fa0c9c677d7922606c0d2b538dc4ad7cb526fb363ef9aa8f"
Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.471628 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b55dccf7-lckdp" Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.493164 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc" podStartSLOduration=3.493143494 podStartE2EDuration="3.493143494s" podCreationTimestamp="2026-01-26 19:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:10:04.486396179 +0000 UTC m=+5173.193532312" watchObservedRunningTime="2026-01-26 19:10:04.493143494 +0000 UTC m=+5173.200279627" Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.527958 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b55dccf7-lckdp"] Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.575003 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b55dccf7-lckdp"] Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.727705 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 26 19:10:04 crc kubenswrapper[4787]: W0126 19:10:04.731124 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba7afaee_0b36_4d28_b0e7_929cb87aac6c.slice/crio-c27da389f3fe92c5abf74c9f7867316741bfa92078af85b0a914d16b2072422f WatchSource:0}: Error finding container c27da389f3fe92c5abf74c9f7867316741bfa92078af85b0a914d16b2072422f: Status 404 returned error can't find the container with id c27da389f3fe92c5abf74c9f7867316741bfa92078af85b0a914d16b2072422f Jan 26 19:10:04 crc kubenswrapper[4787]: I0126 19:10:04.732671 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:10:05 crc kubenswrapper[4787]: I0126 19:10:05.490783 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" 
event={"ID":"ba7afaee-0b36-4d28-b0e7-929cb87aac6c","Type":"ContainerStarted","Data":"d84935215f791bb70771441dd72a2579d55f82085e4fa44a39d970aaae5c6c38"} Jan 26 19:10:05 crc kubenswrapper[4787]: I0126 19:10:05.491069 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"ba7afaee-0b36-4d28-b0e7-929cb87aac6c","Type":"ContainerStarted","Data":"c27da389f3fe92c5abf74c9f7867316741bfa92078af85b0a914d16b2072422f"} Jan 26 19:10:05 crc kubenswrapper[4787]: I0126 19:10:05.508893 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.920306515 podStartE2EDuration="3.508877235s" podCreationTimestamp="2026-01-26 19:10:02 +0000 UTC" firstStartedPulling="2026-01-26 19:10:04.732490838 +0000 UTC m=+5173.439626971" lastFinishedPulling="2026-01-26 19:10:05.321061558 +0000 UTC m=+5174.028197691" observedRunningTime="2026-01-26 19:10:05.507089861 +0000 UTC m=+5174.214226004" watchObservedRunningTime="2026-01-26 19:10:05.508877235 +0000 UTC m=+5174.216013368" Jan 26 19:10:05 crc kubenswrapper[4787]: I0126 19:10:05.601486 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef56df07-e13c-490a-ae33-3ccc21494e13" path="/var/lib/kubelet/pods/ef56df07-e13c-490a-ae33-3ccc21494e13/volumes" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.434755 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.440030 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.443011 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.443359 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.445671 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-t8tb2" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.458741 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.590018 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jq8\" (UniqueName: \"kubernetes.io/projected/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-kube-api-access-45jq8\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.590250 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.590326 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-scripts\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.591156 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.591222 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-config\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.691867 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.691918 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-scripts\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.691997 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.692037 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-config\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc 
kubenswrapper[4787]: I0126 19:10:10.692072 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jq8\" (UniqueName: \"kubernetes.io/projected/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-kube-api-access-45jq8\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.692661 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.693171 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-config\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.693200 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-scripts\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.698751 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.708271 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jq8\" (UniqueName: \"kubernetes.io/projected/5d3d8e05-de0b-4435-b921-76ebd8bf99c9-kube-api-access-45jq8\") pod \"ovn-northd-0\" 
(UID: \"5d3d8e05-de0b-4435-b921-76ebd8bf99c9\") " pod="openstack/ovn-northd-0" Jan 26 19:10:10 crc kubenswrapper[4787]: I0126 19:10:10.759019 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 19:10:11 crc kubenswrapper[4787]: I0126 19:10:11.217964 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 19:10:11 crc kubenswrapper[4787]: I0126 19:10:11.547179 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5d3d8e05-de0b-4435-b921-76ebd8bf99c9","Type":"ContainerStarted","Data":"d1c0bf9583b84b56ad0931b29866032dfcd002f727cc7bd0fb96c4d9b84f1a5d"} Jan 26 19:10:11 crc kubenswrapper[4787]: I0126 19:10:11.548072 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 19:10:11 crc kubenswrapper[4787]: I0126 19:10:11.548111 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5d3d8e05-de0b-4435-b921-76ebd8bf99c9","Type":"ContainerStarted","Data":"79427aee7c1f9152395a77db5b023cef73d7b2a6e596da94a2e21bcb7483468e"} Jan 26 19:10:11 crc kubenswrapper[4787]: I0126 19:10:11.548134 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5d3d8e05-de0b-4435-b921-76ebd8bf99c9","Type":"ContainerStarted","Data":"083702256c600efbfa91515f188440cd1e503c2fb3a8c1bf2e0bc6fa80bc3e93"} Jan 26 19:10:11 crc kubenswrapper[4787]: I0126 19:10:11.563627 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.563608326 podStartE2EDuration="1.563608326s" podCreationTimestamp="2026-01-26 19:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:10:11.563304939 +0000 UTC m=+5180.270441082" watchObservedRunningTime="2026-01-26 19:10:11.563608326 +0000 UTC 
m=+5180.270744469" Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.200174 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc" Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.255027 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-rfgzb"] Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.255262 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" podUID="eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" containerName="dnsmasq-dns" containerID="cri-o://c46c681e5dd2f7b517c011d458cb9b86f3ac2c884bab1ce8f9d4326622c34e46" gracePeriod=10 Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.557106 4787 generic.go:334] "Generic (PLEG): container finished" podID="eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" containerID="c46c681e5dd2f7b517c011d458cb9b86f3ac2c884bab1ce8f9d4326622c34e46" exitCode=0 Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.557208 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" event={"ID":"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f","Type":"ContainerDied","Data":"c46c681e5dd2f7b517c011d458cb9b86f3ac2c884bab1ce8f9d4326622c34e46"} Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.730927 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.848370 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-config\") pod \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.848750 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrzvn\" (UniqueName: \"kubernetes.io/projected/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-kube-api-access-mrzvn\") pod \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.848795 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-dns-svc\") pod \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\" (UID: \"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f\") " Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.856091 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-kube-api-access-mrzvn" (OuterVolumeSpecName: "kube-api-access-mrzvn") pod "eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" (UID: "eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f"). InnerVolumeSpecName "kube-api-access-mrzvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.887273 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" (UID: "eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.891551 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-config" (OuterVolumeSpecName: "config") pod "eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" (UID: "eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.950701 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.950736 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrzvn\" (UniqueName: \"kubernetes.io/projected/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-kube-api-access-mrzvn\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:12 crc kubenswrapper[4787]: I0126 19:10:12.950754 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:13 crc kubenswrapper[4787]: I0126 19:10:13.567235 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" event={"ID":"eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f","Type":"ContainerDied","Data":"7890052e5caaf68f9a99e1ab09f4083731f06637793ce599a96f5d3cec4c7d7d"} Jan 26 19:10:13 crc kubenswrapper[4787]: I0126 19:10:13.567314 4787 scope.go:117] "RemoveContainer" containerID="c46c681e5dd2f7b517c011d458cb9b86f3ac2c884bab1ce8f9d4326622c34e46" Jan 26 19:10:13 crc kubenswrapper[4787]: I0126 19:10:13.567317 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-rfgzb" Jan 26 19:10:13 crc kubenswrapper[4787]: I0126 19:10:13.614261 4787 scope.go:117] "RemoveContainer" containerID="7a174cb039ee372d9827c857b836be7da4fc0eefaea7c774ae9c7e6e5b63380c" Jan 26 19:10:13 crc kubenswrapper[4787]: I0126 19:10:13.624367 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-rfgzb"] Jan 26 19:10:13 crc kubenswrapper[4787]: I0126 19:10:13.653262 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-rfgzb"] Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.428451 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nrnm9"] Jan 26 19:10:15 crc kubenswrapper[4787]: E0126 19:10:15.428869 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" containerName="dnsmasq-dns" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.428886 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" containerName="dnsmasq-dns" Jan 26 19:10:15 crc kubenswrapper[4787]: E0126 19:10:15.428923 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" containerName="init" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.428932 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" containerName="init" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.429158 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" containerName="dnsmasq-dns" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.429797 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nrnm9" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.436560 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nrnm9"] Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.526160 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f508-account-create-update-2plvr"] Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.527337 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f508-account-create-update-2plvr" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.530459 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.535501 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f508-account-create-update-2plvr"] Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.590235 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf090e0a-fc1b-48c1-8678-272ca8ee4901-operator-scripts\") pod \"keystone-db-create-nrnm9\" (UID: \"cf090e0a-fc1b-48c1-8678-272ca8ee4901\") " pod="openstack/keystone-db-create-nrnm9" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.590386 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr65m\" (UniqueName: \"kubernetes.io/projected/cf090e0a-fc1b-48c1-8678-272ca8ee4901-kube-api-access-mr65m\") pod \"keystone-db-create-nrnm9\" (UID: \"cf090e0a-fc1b-48c1-8678-272ca8ee4901\") " pod="openstack/keystone-db-create-nrnm9" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.598987 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f" 
path="/var/lib/kubelet/pods/eb08eb3e-1fd2-4a17-9fe9-a99a6fe3114f/volumes" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.692339 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr65m\" (UniqueName: \"kubernetes.io/projected/cf090e0a-fc1b-48c1-8678-272ca8ee4901-kube-api-access-mr65m\") pod \"keystone-db-create-nrnm9\" (UID: \"cf090e0a-fc1b-48c1-8678-272ca8ee4901\") " pod="openstack/keystone-db-create-nrnm9" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.692554 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158f07c3-118d-4640-85ef-154ad9f2c4e8-operator-scripts\") pod \"keystone-f508-account-create-update-2plvr\" (UID: \"158f07c3-118d-4640-85ef-154ad9f2c4e8\") " pod="openstack/keystone-f508-account-create-update-2plvr" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.692596 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf090e0a-fc1b-48c1-8678-272ca8ee4901-operator-scripts\") pod \"keystone-db-create-nrnm9\" (UID: \"cf090e0a-fc1b-48c1-8678-272ca8ee4901\") " pod="openstack/keystone-db-create-nrnm9" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.692641 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rhzc\" (UniqueName: \"kubernetes.io/projected/158f07c3-118d-4640-85ef-154ad9f2c4e8-kube-api-access-2rhzc\") pod \"keystone-f508-account-create-update-2plvr\" (UID: \"158f07c3-118d-4640-85ef-154ad9f2c4e8\") " pod="openstack/keystone-f508-account-create-update-2plvr" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.693537 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf090e0a-fc1b-48c1-8678-272ca8ee4901-operator-scripts\") pod 
\"keystone-db-create-nrnm9\" (UID: \"cf090e0a-fc1b-48c1-8678-272ca8ee4901\") " pod="openstack/keystone-db-create-nrnm9" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.715914 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr65m\" (UniqueName: \"kubernetes.io/projected/cf090e0a-fc1b-48c1-8678-272ca8ee4901-kube-api-access-mr65m\") pod \"keystone-db-create-nrnm9\" (UID: \"cf090e0a-fc1b-48c1-8678-272ca8ee4901\") " pod="openstack/keystone-db-create-nrnm9" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.748370 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nrnm9" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.794373 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158f07c3-118d-4640-85ef-154ad9f2c4e8-operator-scripts\") pod \"keystone-f508-account-create-update-2plvr\" (UID: \"158f07c3-118d-4640-85ef-154ad9f2c4e8\") " pod="openstack/keystone-f508-account-create-update-2plvr" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.794437 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rhzc\" (UniqueName: \"kubernetes.io/projected/158f07c3-118d-4640-85ef-154ad9f2c4e8-kube-api-access-2rhzc\") pod \"keystone-f508-account-create-update-2plvr\" (UID: \"158f07c3-118d-4640-85ef-154ad9f2c4e8\") " pod="openstack/keystone-f508-account-create-update-2plvr" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.795333 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158f07c3-118d-4640-85ef-154ad9f2c4e8-operator-scripts\") pod \"keystone-f508-account-create-update-2plvr\" (UID: \"158f07c3-118d-4640-85ef-154ad9f2c4e8\") " pod="openstack/keystone-f508-account-create-update-2plvr" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 
19:10:15.813795 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rhzc\" (UniqueName: \"kubernetes.io/projected/158f07c3-118d-4640-85ef-154ad9f2c4e8-kube-api-access-2rhzc\") pod \"keystone-f508-account-create-update-2plvr\" (UID: \"158f07c3-118d-4640-85ef-154ad9f2c4e8\") " pod="openstack/keystone-f508-account-create-update-2plvr" Jan 26 19:10:15 crc kubenswrapper[4787]: I0126 19:10:15.843289 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f508-account-create-update-2plvr" Jan 26 19:10:16 crc kubenswrapper[4787]: I0126 19:10:16.202778 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nrnm9"] Jan 26 19:10:16 crc kubenswrapper[4787]: W0126 19:10:16.205802 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf090e0a_fc1b_48c1_8678_272ca8ee4901.slice/crio-9f83c8009699069139aec10007e764d9230fbad557ea009dbb094429416dc098 WatchSource:0}: Error finding container 9f83c8009699069139aec10007e764d9230fbad557ea009dbb094429416dc098: Status 404 returned error can't find the container with id 9f83c8009699069139aec10007e764d9230fbad557ea009dbb094429416dc098 Jan 26 19:10:16 crc kubenswrapper[4787]: I0126 19:10:16.367065 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f508-account-create-update-2plvr"] Jan 26 19:10:16 crc kubenswrapper[4787]: W0126 19:10:16.374599 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod158f07c3_118d_4640_85ef_154ad9f2c4e8.slice/crio-684341ce8fbd3fe0923826163fa4e3cebf5e680a31e209828a975e33c767b318 WatchSource:0}: Error finding container 684341ce8fbd3fe0923826163fa4e3cebf5e680a31e209828a975e33c767b318: Status 404 returned error can't find the container with id 684341ce8fbd3fe0923826163fa4e3cebf5e680a31e209828a975e33c767b318 Jan 26 
19:10:16 crc kubenswrapper[4787]: I0126 19:10:16.589016 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f508-account-create-update-2plvr" event={"ID":"158f07c3-118d-4640-85ef-154ad9f2c4e8","Type":"ContainerStarted","Data":"347c2e1063162c2ff65c8a5391d8e719ce7c0e8559659d8a2b9d34278980b55b"} Jan 26 19:10:16 crc kubenswrapper[4787]: I0126 19:10:16.589065 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f508-account-create-update-2plvr" event={"ID":"158f07c3-118d-4640-85ef-154ad9f2c4e8","Type":"ContainerStarted","Data":"684341ce8fbd3fe0923826163fa4e3cebf5e680a31e209828a975e33c767b318"} Jan 26 19:10:16 crc kubenswrapper[4787]: I0126 19:10:16.590665 4787 generic.go:334] "Generic (PLEG): container finished" podID="cf090e0a-fc1b-48c1-8678-272ca8ee4901" containerID="f22587002addd2fce599fe8874abdc705cbe9a6453697670e1579de379d32e12" exitCode=0 Jan 26 19:10:16 crc kubenswrapper[4787]: I0126 19:10:16.590704 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nrnm9" event={"ID":"cf090e0a-fc1b-48c1-8678-272ca8ee4901","Type":"ContainerDied","Data":"f22587002addd2fce599fe8874abdc705cbe9a6453697670e1579de379d32e12"} Jan 26 19:10:16 crc kubenswrapper[4787]: I0126 19:10:16.590724 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nrnm9" event={"ID":"cf090e0a-fc1b-48c1-8678-272ca8ee4901","Type":"ContainerStarted","Data":"9f83c8009699069139aec10007e764d9230fbad557ea009dbb094429416dc098"} Jan 26 19:10:16 crc kubenswrapper[4787]: I0126 19:10:16.603810 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f508-account-create-update-2plvr" podStartSLOduration=1.603790354 podStartE2EDuration="1.603790354s" podCreationTimestamp="2026-01-26 19:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:10:16.603004044 +0000 
UTC m=+5185.310140177" watchObservedRunningTime="2026-01-26 19:10:16.603790354 +0000 UTC m=+5185.310926497" Jan 26 19:10:17 crc kubenswrapper[4787]: I0126 19:10:17.623707 4787 generic.go:334] "Generic (PLEG): container finished" podID="158f07c3-118d-4640-85ef-154ad9f2c4e8" containerID="347c2e1063162c2ff65c8a5391d8e719ce7c0e8559659d8a2b9d34278980b55b" exitCode=0 Jan 26 19:10:17 crc kubenswrapper[4787]: I0126 19:10:17.630323 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f508-account-create-update-2plvr" event={"ID":"158f07c3-118d-4640-85ef-154ad9f2c4e8","Type":"ContainerDied","Data":"347c2e1063162c2ff65c8a5391d8e719ce7c0e8559659d8a2b9d34278980b55b"} Jan 26 19:10:17 crc kubenswrapper[4787]: I0126 19:10:17.958170 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nrnm9" Jan 26 19:10:18 crc kubenswrapper[4787]: I0126 19:10:18.036117 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr65m\" (UniqueName: \"kubernetes.io/projected/cf090e0a-fc1b-48c1-8678-272ca8ee4901-kube-api-access-mr65m\") pod \"cf090e0a-fc1b-48c1-8678-272ca8ee4901\" (UID: \"cf090e0a-fc1b-48c1-8678-272ca8ee4901\") " Jan 26 19:10:18 crc kubenswrapper[4787]: I0126 19:10:18.036169 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf090e0a-fc1b-48c1-8678-272ca8ee4901-operator-scripts\") pod \"cf090e0a-fc1b-48c1-8678-272ca8ee4901\" (UID: \"cf090e0a-fc1b-48c1-8678-272ca8ee4901\") " Jan 26 19:10:18 crc kubenswrapper[4787]: I0126 19:10:18.036843 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf090e0a-fc1b-48c1-8678-272ca8ee4901-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf090e0a-fc1b-48c1-8678-272ca8ee4901" (UID: "cf090e0a-fc1b-48c1-8678-272ca8ee4901"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:10:18 crc kubenswrapper[4787]: I0126 19:10:18.046272 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf090e0a-fc1b-48c1-8678-272ca8ee4901-kube-api-access-mr65m" (OuterVolumeSpecName: "kube-api-access-mr65m") pod "cf090e0a-fc1b-48c1-8678-272ca8ee4901" (UID: "cf090e0a-fc1b-48c1-8678-272ca8ee4901"). InnerVolumeSpecName "kube-api-access-mr65m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:10:18 crc kubenswrapper[4787]: I0126 19:10:18.137777 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr65m\" (UniqueName: \"kubernetes.io/projected/cf090e0a-fc1b-48c1-8678-272ca8ee4901-kube-api-access-mr65m\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:18 crc kubenswrapper[4787]: I0126 19:10:18.137826 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf090e0a-fc1b-48c1-8678-272ca8ee4901-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:18 crc kubenswrapper[4787]: I0126 19:10:18.633095 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nrnm9" Jan 26 19:10:18 crc kubenswrapper[4787]: I0126 19:10:18.633192 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nrnm9" event={"ID":"cf090e0a-fc1b-48c1-8678-272ca8ee4901","Type":"ContainerDied","Data":"9f83c8009699069139aec10007e764d9230fbad557ea009dbb094429416dc098"} Jan 26 19:10:18 crc kubenswrapper[4787]: I0126 19:10:18.633251 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f83c8009699069139aec10007e764d9230fbad557ea009dbb094429416dc098" Jan 26 19:10:18 crc kubenswrapper[4787]: I0126 19:10:18.981342 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f508-account-create-update-2plvr" Jan 26 19:10:19 crc kubenswrapper[4787]: I0126 19:10:19.153869 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rhzc\" (UniqueName: \"kubernetes.io/projected/158f07c3-118d-4640-85ef-154ad9f2c4e8-kube-api-access-2rhzc\") pod \"158f07c3-118d-4640-85ef-154ad9f2c4e8\" (UID: \"158f07c3-118d-4640-85ef-154ad9f2c4e8\") " Jan 26 19:10:19 crc kubenswrapper[4787]: I0126 19:10:19.154196 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158f07c3-118d-4640-85ef-154ad9f2c4e8-operator-scripts\") pod \"158f07c3-118d-4640-85ef-154ad9f2c4e8\" (UID: \"158f07c3-118d-4640-85ef-154ad9f2c4e8\") " Jan 26 19:10:19 crc kubenswrapper[4787]: I0126 19:10:19.155153 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158f07c3-118d-4640-85ef-154ad9f2c4e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "158f07c3-118d-4640-85ef-154ad9f2c4e8" (UID: "158f07c3-118d-4640-85ef-154ad9f2c4e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:10:19 crc kubenswrapper[4787]: I0126 19:10:19.170808 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158f07c3-118d-4640-85ef-154ad9f2c4e8-kube-api-access-2rhzc" (OuterVolumeSpecName: "kube-api-access-2rhzc") pod "158f07c3-118d-4640-85ef-154ad9f2c4e8" (UID: "158f07c3-118d-4640-85ef-154ad9f2c4e8"). InnerVolumeSpecName "kube-api-access-2rhzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:10:19 crc kubenswrapper[4787]: I0126 19:10:19.255874 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rhzc\" (UniqueName: \"kubernetes.io/projected/158f07c3-118d-4640-85ef-154ad9f2c4e8-kube-api-access-2rhzc\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:19 crc kubenswrapper[4787]: I0126 19:10:19.255906 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158f07c3-118d-4640-85ef-154ad9f2c4e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:19 crc kubenswrapper[4787]: I0126 19:10:19.640809 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f508-account-create-update-2plvr" event={"ID":"158f07c3-118d-4640-85ef-154ad9f2c4e8","Type":"ContainerDied","Data":"684341ce8fbd3fe0923826163fa4e3cebf5e680a31e209828a975e33c767b318"} Jan 26 19:10:19 crc kubenswrapper[4787]: I0126 19:10:19.640847 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="684341ce8fbd3fe0923826163fa4e3cebf5e680a31e209828a975e33c767b318" Jan 26 19:10:19 crc kubenswrapper[4787]: I0126 19:10:19.640900 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f508-account-create-update-2plvr" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.061442 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-t9r29"] Jan 26 19:10:21 crc kubenswrapper[4787]: E0126 19:10:21.061782 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158f07c3-118d-4640-85ef-154ad9f2c4e8" containerName="mariadb-account-create-update" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.061797 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="158f07c3-118d-4640-85ef-154ad9f2c4e8" containerName="mariadb-account-create-update" Jan 26 19:10:21 crc kubenswrapper[4787]: E0126 19:10:21.061813 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf090e0a-fc1b-48c1-8678-272ca8ee4901" containerName="mariadb-database-create" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.061823 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf090e0a-fc1b-48c1-8678-272ca8ee4901" containerName="mariadb-database-create" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.061999 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf090e0a-fc1b-48c1-8678-272ca8ee4901" containerName="mariadb-database-create" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.062019 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="158f07c3-118d-4640-85ef-154ad9f2c4e8" containerName="mariadb-account-create-update" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.062557 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.065040 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.065268 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.065415 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nczj4" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.065569 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.074547 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-t9r29"] Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.086896 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29t4c\" (UniqueName: \"kubernetes.io/projected/497b87c0-2345-4044-a0eb-f620bc564ad0-kube-api-access-29t4c\") pod \"keystone-db-sync-t9r29\" (UID: \"497b87c0-2345-4044-a0eb-f620bc564ad0\") " pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.087124 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-config-data\") pod \"keystone-db-sync-t9r29\" (UID: \"497b87c0-2345-4044-a0eb-f620bc564ad0\") " pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.087264 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-combined-ca-bundle\") pod \"keystone-db-sync-t9r29\" (UID: 
\"497b87c0-2345-4044-a0eb-f620bc564ad0\") " pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.188231 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-combined-ca-bundle\") pod \"keystone-db-sync-t9r29\" (UID: \"497b87c0-2345-4044-a0eb-f620bc564ad0\") " pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.188297 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29t4c\" (UniqueName: \"kubernetes.io/projected/497b87c0-2345-4044-a0eb-f620bc564ad0-kube-api-access-29t4c\") pod \"keystone-db-sync-t9r29\" (UID: \"497b87c0-2345-4044-a0eb-f620bc564ad0\") " pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.188319 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-config-data\") pod \"keystone-db-sync-t9r29\" (UID: \"497b87c0-2345-4044-a0eb-f620bc564ad0\") " pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.195336 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-combined-ca-bundle\") pod \"keystone-db-sync-t9r29\" (UID: \"497b87c0-2345-4044-a0eb-f620bc564ad0\") " pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.202889 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-config-data\") pod \"keystone-db-sync-t9r29\" (UID: \"497b87c0-2345-4044-a0eb-f620bc564ad0\") " pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:21 crc kubenswrapper[4787]: 
I0126 19:10:21.206638 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29t4c\" (UniqueName: \"kubernetes.io/projected/497b87c0-2345-4044-a0eb-f620bc564ad0-kube-api-access-29t4c\") pod \"keystone-db-sync-t9r29\" (UID: \"497b87c0-2345-4044-a0eb-f620bc564ad0\") " pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.389026 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:21 crc kubenswrapper[4787]: I0126 19:10:21.864339 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-t9r29"] Jan 26 19:10:22 crc kubenswrapper[4787]: I0126 19:10:22.661216 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t9r29" event={"ID":"497b87c0-2345-4044-a0eb-f620bc564ad0","Type":"ContainerStarted","Data":"4fc99f4e88a1afc4d779c61641f5993141e81174a45c2b7d6b424dc51f135af9"} Jan 26 19:10:22 crc kubenswrapper[4787]: I0126 19:10:22.661545 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t9r29" event={"ID":"497b87c0-2345-4044-a0eb-f620bc564ad0","Type":"ContainerStarted","Data":"dd71e6ac9f823467954f806efb5b02d67005c2d86ad187157e43f93b2494954b"} Jan 26 19:10:23 crc kubenswrapper[4787]: I0126 19:10:23.669562 4787 generic.go:334] "Generic (PLEG): container finished" podID="497b87c0-2345-4044-a0eb-f620bc564ad0" containerID="4fc99f4e88a1afc4d779c61641f5993141e81174a45c2b7d6b424dc51f135af9" exitCode=0 Jan 26 19:10:23 crc kubenswrapper[4787]: I0126 19:10:23.669665 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t9r29" event={"ID":"497b87c0-2345-4044-a0eb-f620bc564ad0","Type":"ContainerDied","Data":"4fc99f4e88a1afc4d779c61641f5993141e81174a45c2b7d6b424dc51f135af9"} Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.032892 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.165412 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-config-data\") pod \"497b87c0-2345-4044-a0eb-f620bc564ad0\" (UID: \"497b87c0-2345-4044-a0eb-f620bc564ad0\") " Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.165836 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-combined-ca-bundle\") pod \"497b87c0-2345-4044-a0eb-f620bc564ad0\" (UID: \"497b87c0-2345-4044-a0eb-f620bc564ad0\") " Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.165860 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29t4c\" (UniqueName: \"kubernetes.io/projected/497b87c0-2345-4044-a0eb-f620bc564ad0-kube-api-access-29t4c\") pod \"497b87c0-2345-4044-a0eb-f620bc564ad0\" (UID: \"497b87c0-2345-4044-a0eb-f620bc564ad0\") " Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.170733 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/497b87c0-2345-4044-a0eb-f620bc564ad0-kube-api-access-29t4c" (OuterVolumeSpecName: "kube-api-access-29t4c") pod "497b87c0-2345-4044-a0eb-f620bc564ad0" (UID: "497b87c0-2345-4044-a0eb-f620bc564ad0"). InnerVolumeSpecName "kube-api-access-29t4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.206484 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "497b87c0-2345-4044-a0eb-f620bc564ad0" (UID: "497b87c0-2345-4044-a0eb-f620bc564ad0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.207550 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-config-data" (OuterVolumeSpecName: "config-data") pod "497b87c0-2345-4044-a0eb-f620bc564ad0" (UID: "497b87c0-2345-4044-a0eb-f620bc564ad0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.267579 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.267623 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497b87c0-2345-4044-a0eb-f620bc564ad0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.267637 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29t4c\" (UniqueName: \"kubernetes.io/projected/497b87c0-2345-4044-a0eb-f620bc564ad0-kube-api-access-29t4c\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.686760 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t9r29" event={"ID":"497b87c0-2345-4044-a0eb-f620bc564ad0","Type":"ContainerDied","Data":"dd71e6ac9f823467954f806efb5b02d67005c2d86ad187157e43f93b2494954b"} Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.686805 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd71e6ac9f823467954f806efb5b02d67005c2d86ad187157e43f93b2494954b" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.686829 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-t9r29" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.943251 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-549665877f-ts9sr"] Jan 26 19:10:25 crc kubenswrapper[4787]: E0126 19:10:25.943637 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497b87c0-2345-4044-a0eb-f620bc564ad0" containerName="keystone-db-sync" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.943657 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="497b87c0-2345-4044-a0eb-f620bc564ad0" containerName="keystone-db-sync" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.943888 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="497b87c0-2345-4044-a0eb-f620bc564ad0" containerName="keystone-db-sync" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.945396 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.956819 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549665877f-ts9sr"] Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.996594 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qvkj9"] Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.997642 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:25 crc kubenswrapper[4787]: I0126 19:10:25.999076 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.001616 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.001653 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.001841 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.001968 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nczj4" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.007081 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qvkj9"] Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.080936 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-sb\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.081308 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-dns-svc\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.081343 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-config\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.081383 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-nb\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.081433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpghd\" (UniqueName: \"kubernetes.io/projected/d2d75cea-2569-4172-9225-38f5b18d35bb-kube-api-access-cpghd\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.182442 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-scripts\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.182499 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-combined-ca-bundle\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.182526 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-credential-keys\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.182548 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpghd\" (UniqueName: \"kubernetes.io/projected/d2d75cea-2569-4172-9225-38f5b18d35bb-kube-api-access-cpghd\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.182580 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-fernet-keys\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.182602 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-sb\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.182644 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-config-data\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.182659 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cvfbz\" (UniqueName: \"kubernetes.io/projected/3ddfac64-89f2-479e-b55a-1eadeb14a435-kube-api-access-cvfbz\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.182680 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-dns-svc\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.182708 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-config\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.182746 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-nb\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.183573 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-nb\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.184281 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-sb\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.184371 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-config\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.184686 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-dns-svc\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.201446 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpghd\" (UniqueName: \"kubernetes.io/projected/d2d75cea-2569-4172-9225-38f5b18d35bb-kube-api-access-cpghd\") pod \"dnsmasq-dns-549665877f-ts9sr\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.261515 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.284479 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-fernet-keys\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.284581 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-config-data\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.284607 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvfbz\" (UniqueName: \"kubernetes.io/projected/3ddfac64-89f2-479e-b55a-1eadeb14a435-kube-api-access-cvfbz\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.284699 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-scripts\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.284725 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-combined-ca-bundle\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: 
I0126 19:10:26.284753 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-credential-keys\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.291076 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-config-data\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.291315 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-scripts\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.292571 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-fernet-keys\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.295076 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-credential-keys\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.297018 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-combined-ca-bundle\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.304994 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvfbz\" (UniqueName: \"kubernetes.io/projected/3ddfac64-89f2-479e-b55a-1eadeb14a435-kube-api-access-cvfbz\") pod \"keystone-bootstrap-qvkj9\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.315861 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.731022 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549665877f-ts9sr"] Jan 26 19:10:26 crc kubenswrapper[4787]: W0126 19:10:26.731376 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2d75cea_2569_4172_9225_38f5b18d35bb.slice/crio-09ff5a83e592793f8bc5cbe3a9789e26a1d97963e7dd5e6e2f74720579b165f3 WatchSource:0}: Error finding container 09ff5a83e592793f8bc5cbe3a9789e26a1d97963e7dd5e6e2f74720579b165f3: Status 404 returned error can't find the container with id 09ff5a83e592793f8bc5cbe3a9789e26a1d97963e7dd5e6e2f74720579b165f3 Jan 26 19:10:26 crc kubenswrapper[4787]: I0126 19:10:26.852598 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qvkj9"] Jan 26 19:10:27 crc kubenswrapper[4787]: I0126 19:10:27.699064 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qvkj9" event={"ID":"3ddfac64-89f2-479e-b55a-1eadeb14a435","Type":"ContainerStarted","Data":"53a8287007092fec69bec5a4bd0084dddeab30c9bdf73cf5de22f16486d1c7c0"} Jan 26 19:10:27 crc kubenswrapper[4787]: I0126 
19:10:27.699584 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qvkj9" event={"ID":"3ddfac64-89f2-479e-b55a-1eadeb14a435","Type":"ContainerStarted","Data":"34e0d23f1159b80e456b68341ed7b5aafdf11722e4e0c5a58bcfa86057ec7f79"} Jan 26 19:10:27 crc kubenswrapper[4787]: I0126 19:10:27.700899 4787 generic.go:334] "Generic (PLEG): container finished" podID="d2d75cea-2569-4172-9225-38f5b18d35bb" containerID="10e47e814b0b473e2e04b1c658976fefe4c54799e431b1e594a828423790865a" exitCode=0 Jan 26 19:10:27 crc kubenswrapper[4787]: I0126 19:10:27.700929 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549665877f-ts9sr" event={"ID":"d2d75cea-2569-4172-9225-38f5b18d35bb","Type":"ContainerDied","Data":"10e47e814b0b473e2e04b1c658976fefe4c54799e431b1e594a828423790865a"} Jan 26 19:10:27 crc kubenswrapper[4787]: I0126 19:10:27.700957 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549665877f-ts9sr" event={"ID":"d2d75cea-2569-4172-9225-38f5b18d35bb","Type":"ContainerStarted","Data":"09ff5a83e592793f8bc5cbe3a9789e26a1d97963e7dd5e6e2f74720579b165f3"} Jan 26 19:10:27 crc kubenswrapper[4787]: I0126 19:10:27.744497 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qvkj9" podStartSLOduration=2.744480702 podStartE2EDuration="2.744480702s" podCreationTimestamp="2026-01-26 19:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:10:27.742070734 +0000 UTC m=+5196.449206867" watchObservedRunningTime="2026-01-26 19:10:27.744480702 +0000 UTC m=+5196.451616835" Jan 26 19:10:28 crc kubenswrapper[4787]: I0126 19:10:28.711585 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549665877f-ts9sr" 
event={"ID":"d2d75cea-2569-4172-9225-38f5b18d35bb","Type":"ContainerStarted","Data":"62f57cc0cf85457dbea41ec9ff0772d3cbbae8ba9efb0f0300a598fe2e380737"} Jan 26 19:10:28 crc kubenswrapper[4787]: I0126 19:10:28.736410 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-549665877f-ts9sr" podStartSLOduration=3.736385451 podStartE2EDuration="3.736385451s" podCreationTimestamp="2026-01-26 19:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:10:28.728713195 +0000 UTC m=+5197.435849338" watchObservedRunningTime="2026-01-26 19:10:28.736385451 +0000 UTC m=+5197.443521584" Jan 26 19:10:29 crc kubenswrapper[4787]: I0126 19:10:29.721702 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:30 crc kubenswrapper[4787]: I0126 19:10:30.731239 4787 generic.go:334] "Generic (PLEG): container finished" podID="3ddfac64-89f2-479e-b55a-1eadeb14a435" containerID="53a8287007092fec69bec5a4bd0084dddeab30c9bdf73cf5de22f16486d1c7c0" exitCode=0 Jan 26 19:10:30 crc kubenswrapper[4787]: I0126 19:10:30.731363 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qvkj9" event={"ID":"3ddfac64-89f2-479e-b55a-1eadeb14a435","Type":"ContainerDied","Data":"53a8287007092fec69bec5a4bd0084dddeab30c9bdf73cf5de22f16486d1c7c0"} Jan 26 19:10:30 crc kubenswrapper[4787]: I0126 19:10:30.839251 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.112801 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.286269 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-credential-keys\") pod \"3ddfac64-89f2-479e-b55a-1eadeb14a435\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.286314 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-scripts\") pod \"3ddfac64-89f2-479e-b55a-1eadeb14a435\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.286349 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvfbz\" (UniqueName: \"kubernetes.io/projected/3ddfac64-89f2-479e-b55a-1eadeb14a435-kube-api-access-cvfbz\") pod \"3ddfac64-89f2-479e-b55a-1eadeb14a435\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.286406 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-fernet-keys\") pod \"3ddfac64-89f2-479e-b55a-1eadeb14a435\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.286442 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-config-data\") pod \"3ddfac64-89f2-479e-b55a-1eadeb14a435\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.286493 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-combined-ca-bundle\") pod \"3ddfac64-89f2-479e-b55a-1eadeb14a435\" (UID: \"3ddfac64-89f2-479e-b55a-1eadeb14a435\") " Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.292723 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ddfac64-89f2-479e-b55a-1eadeb14a435-kube-api-access-cvfbz" (OuterVolumeSpecName: "kube-api-access-cvfbz") pod "3ddfac64-89f2-479e-b55a-1eadeb14a435" (UID: "3ddfac64-89f2-479e-b55a-1eadeb14a435"). InnerVolumeSpecName "kube-api-access-cvfbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.292742 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-scripts" (OuterVolumeSpecName: "scripts") pod "3ddfac64-89f2-479e-b55a-1eadeb14a435" (UID: "3ddfac64-89f2-479e-b55a-1eadeb14a435"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.294066 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3ddfac64-89f2-479e-b55a-1eadeb14a435" (UID: "3ddfac64-89f2-479e-b55a-1eadeb14a435"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.302796 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3ddfac64-89f2-479e-b55a-1eadeb14a435" (UID: "3ddfac64-89f2-479e-b55a-1eadeb14a435"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.309101 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-config-data" (OuterVolumeSpecName: "config-data") pod "3ddfac64-89f2-479e-b55a-1eadeb14a435" (UID: "3ddfac64-89f2-479e-b55a-1eadeb14a435"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.314720 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ddfac64-89f2-479e-b55a-1eadeb14a435" (UID: "3ddfac64-89f2-479e-b55a-1eadeb14a435"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.388329 4787 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.388359 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.388371 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvfbz\" (UniqueName: \"kubernetes.io/projected/3ddfac64-89f2-479e-b55a-1eadeb14a435-kube-api-access-cvfbz\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.388382 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 
19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.388396 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.388406 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ddfac64-89f2-479e-b55a-1eadeb14a435-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.748654 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qvkj9" event={"ID":"3ddfac64-89f2-479e-b55a-1eadeb14a435","Type":"ContainerDied","Data":"34e0d23f1159b80e456b68341ed7b5aafdf11722e4e0c5a58bcfa86057ec7f79"} Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.748695 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34e0d23f1159b80e456b68341ed7b5aafdf11722e4e0c5a58bcfa86057ec7f79" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.748733 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qvkj9" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.841283 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qvkj9"] Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.850674 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qvkj9"] Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.930329 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7bd45"] Jan 26 19:10:32 crc kubenswrapper[4787]: E0126 19:10:32.930709 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ddfac64-89f2-479e-b55a-1eadeb14a435" containerName="keystone-bootstrap" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.930734 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ddfac64-89f2-479e-b55a-1eadeb14a435" containerName="keystone-bootstrap" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.930918 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ddfac64-89f2-479e-b55a-1eadeb14a435" containerName="keystone-bootstrap" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.931570 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.935374 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.935421 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.936692 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.937414 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.937466 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nczj4" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.962085 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7bd45"] Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.997515 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnmmw\" (UniqueName: \"kubernetes.io/projected/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-kube-api-access-dnmmw\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.997621 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-scripts\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.997675 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-config-data\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.997725 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-fernet-keys\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.997762 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-combined-ca-bundle\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:32 crc kubenswrapper[4787]: I0126 19:10:32.997850 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-credential-keys\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.099626 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-combined-ca-bundle\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.099724 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-credential-keys\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.099757 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnmmw\" (UniqueName: \"kubernetes.io/projected/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-kube-api-access-dnmmw\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.099823 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-scripts\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.099871 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-config-data\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.099903 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-fernet-keys\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.103518 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-scripts\") pod \"keystone-bootstrap-7bd45\" (UID: 
\"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.103865 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-combined-ca-bundle\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.107791 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-fernet-keys\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.114548 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-credential-keys\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.114987 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-config-data\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.120111 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnmmw\" (UniqueName: \"kubernetes.io/projected/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-kube-api-access-dnmmw\") pod \"keystone-bootstrap-7bd45\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 
19:10:33.260736 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.601971 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ddfac64-89f2-479e-b55a-1eadeb14a435" path="/var/lib/kubelet/pods/3ddfac64-89f2-479e-b55a-1eadeb14a435/volumes" Jan 26 19:10:33 crc kubenswrapper[4787]: I0126 19:10:33.758213 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7bd45"] Jan 26 19:10:34 crc kubenswrapper[4787]: I0126 19:10:34.769851 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7bd45" event={"ID":"5b42381c-1dc3-4245-9d23-c0eec94f6ae1","Type":"ContainerStarted","Data":"71d0a94f19b26b3bf57fd6be788e894b6ac86e87a5147dbf833144ac05f414c2"} Jan 26 19:10:34 crc kubenswrapper[4787]: I0126 19:10:34.770235 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7bd45" event={"ID":"5b42381c-1dc3-4245-9d23-c0eec94f6ae1","Type":"ContainerStarted","Data":"f19ee41bd86a1a83a20dd588bdeb8bcfc8af8075f4f990db9432d72df9f8c368"} Jan 26 19:10:36 crc kubenswrapper[4787]: I0126 19:10:36.262995 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:10:36 crc kubenswrapper[4787]: I0126 19:10:36.294602 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7bd45" podStartSLOduration=4.294567952 podStartE2EDuration="4.294567952s" podCreationTimestamp="2026-01-26 19:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:10:34.798448438 +0000 UTC m=+5203.505584581" watchObservedRunningTime="2026-01-26 19:10:36.294567952 +0000 UTC m=+5205.001704125" Jan 26 19:10:36 crc kubenswrapper[4787]: I0126 19:10:36.351242 4787 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7684477b9f-kgcfc"] Jan 26 19:10:36 crc kubenswrapper[4787]: I0126 19:10:36.351465 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc" podUID="59c34360-12fe-4ad8-a7fc-f66984bfb115" containerName="dnsmasq-dns" containerID="cri-o://6fb28ef69a3d5fee9a8ee6faa8ca7754967bb78a4504ae284ef733e8cd69b082" gracePeriod=10 Jan 26 19:10:36 crc kubenswrapper[4787]: I0126 19:10:36.798754 4787 generic.go:334] "Generic (PLEG): container finished" podID="59c34360-12fe-4ad8-a7fc-f66984bfb115" containerID="6fb28ef69a3d5fee9a8ee6faa8ca7754967bb78a4504ae284ef733e8cd69b082" exitCode=0 Jan 26 19:10:36 crc kubenswrapper[4787]: I0126 19:10:36.799131 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc" event={"ID":"59c34360-12fe-4ad8-a7fc-f66984bfb115","Type":"ContainerDied","Data":"6fb28ef69a3d5fee9a8ee6faa8ca7754967bb78a4504ae284ef733e8cd69b082"} Jan 26 19:10:36 crc kubenswrapper[4787]: I0126 19:10:36.801746 4787 generic.go:334] "Generic (PLEG): container finished" podID="5b42381c-1dc3-4245-9d23-c0eec94f6ae1" containerID="71d0a94f19b26b3bf57fd6be788e894b6ac86e87a5147dbf833144ac05f414c2" exitCode=0 Jan 26 19:10:36 crc kubenswrapper[4787]: I0126 19:10:36.801779 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7bd45" event={"ID":"5b42381c-1dc3-4245-9d23-c0eec94f6ae1","Type":"ContainerDied","Data":"71d0a94f19b26b3bf57fd6be788e894b6ac86e87a5147dbf833144ac05f414c2"} Jan 26 19:10:36 crc kubenswrapper[4787]: I0126 19:10:36.947250 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.063807 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjq99\" (UniqueName: \"kubernetes.io/projected/59c34360-12fe-4ad8-a7fc-f66984bfb115-kube-api-access-cjq99\") pod \"59c34360-12fe-4ad8-a7fc-f66984bfb115\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.063904 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-config\") pod \"59c34360-12fe-4ad8-a7fc-f66984bfb115\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.064010 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-sb\") pod \"59c34360-12fe-4ad8-a7fc-f66984bfb115\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.064085 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-nb\") pod \"59c34360-12fe-4ad8-a7fc-f66984bfb115\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.064114 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-dns-svc\") pod \"59c34360-12fe-4ad8-a7fc-f66984bfb115\" (UID: \"59c34360-12fe-4ad8-a7fc-f66984bfb115\") " Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.075201 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/59c34360-12fe-4ad8-a7fc-f66984bfb115-kube-api-access-cjq99" (OuterVolumeSpecName: "kube-api-access-cjq99") pod "59c34360-12fe-4ad8-a7fc-f66984bfb115" (UID: "59c34360-12fe-4ad8-a7fc-f66984bfb115"). InnerVolumeSpecName "kube-api-access-cjq99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.103761 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59c34360-12fe-4ad8-a7fc-f66984bfb115" (UID: "59c34360-12fe-4ad8-a7fc-f66984bfb115"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.108762 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-config" (OuterVolumeSpecName: "config") pod "59c34360-12fe-4ad8-a7fc-f66984bfb115" (UID: "59c34360-12fe-4ad8-a7fc-f66984bfb115"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.112570 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59c34360-12fe-4ad8-a7fc-f66984bfb115" (UID: "59c34360-12fe-4ad8-a7fc-f66984bfb115"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.123695 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59c34360-12fe-4ad8-a7fc-f66984bfb115" (UID: "59c34360-12fe-4ad8-a7fc-f66984bfb115"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.166292 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjq99\" (UniqueName: \"kubernetes.io/projected/59c34360-12fe-4ad8-a7fc-f66984bfb115-kube-api-access-cjq99\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.166326 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.166335 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.166343 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.166351 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59c34360-12fe-4ad8-a7fc-f66984bfb115-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.815337 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc" event={"ID":"59c34360-12fe-4ad8-a7fc-f66984bfb115","Type":"ContainerDied","Data":"0475017b8010b27396fe14ee4e1978a99ed4e1a9073151db832371e9a0d6f0a3"} Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.815812 4787 scope.go:117] "RemoveContainer" containerID="6fb28ef69a3d5fee9a8ee6faa8ca7754967bb78a4504ae284ef733e8cd69b082" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.815470 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7684477b9f-kgcfc" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.851657 4787 scope.go:117] "RemoveContainer" containerID="645377a5b1968428fb0faff92ec5e61fc1091b1518b76a3d30be27c3cc3c39b8" Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.863404 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7684477b9f-kgcfc"] Jan 26 19:10:37 crc kubenswrapper[4787]: I0126 19:10:37.877563 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7684477b9f-kgcfc"] Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.174475 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.284846 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-scripts\") pod \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.285025 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-combined-ca-bundle\") pod \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.285081 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-credential-keys\") pod \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.285170 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnmmw\" (UniqueName: 
\"kubernetes.io/projected/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-kube-api-access-dnmmw\") pod \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.285242 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-fernet-keys\") pod \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.285295 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-config-data\") pod \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\" (UID: \"5b42381c-1dc3-4245-9d23-c0eec94f6ae1\") " Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.289995 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-kube-api-access-dnmmw" (OuterVolumeSpecName: "kube-api-access-dnmmw") pod "5b42381c-1dc3-4245-9d23-c0eec94f6ae1" (UID: "5b42381c-1dc3-4245-9d23-c0eec94f6ae1"). InnerVolumeSpecName "kube-api-access-dnmmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.290564 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5b42381c-1dc3-4245-9d23-c0eec94f6ae1" (UID: "5b42381c-1dc3-4245-9d23-c0eec94f6ae1"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.291115 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-scripts" (OuterVolumeSpecName: "scripts") pod "5b42381c-1dc3-4245-9d23-c0eec94f6ae1" (UID: "5b42381c-1dc3-4245-9d23-c0eec94f6ae1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.291743 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5b42381c-1dc3-4245-9d23-c0eec94f6ae1" (UID: "5b42381c-1dc3-4245-9d23-c0eec94f6ae1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.306929 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-config-data" (OuterVolumeSpecName: "config-data") pod "5b42381c-1dc3-4245-9d23-c0eec94f6ae1" (UID: "5b42381c-1dc3-4245-9d23-c0eec94f6ae1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.314769 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b42381c-1dc3-4245-9d23-c0eec94f6ae1" (UID: "5b42381c-1dc3-4245-9d23-c0eec94f6ae1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.387029 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnmmw\" (UniqueName: \"kubernetes.io/projected/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-kube-api-access-dnmmw\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.387396 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.387409 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.387417 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.387427 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.387435 4787 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5b42381c-1dc3-4245-9d23-c0eec94f6ae1-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.833451 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7bd45" event={"ID":"5b42381c-1dc3-4245-9d23-c0eec94f6ae1","Type":"ContainerDied","Data":"f19ee41bd86a1a83a20dd588bdeb8bcfc8af8075f4f990db9432d72df9f8c368"} Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 
19:10:38.833503 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f19ee41bd86a1a83a20dd588bdeb8bcfc8af8075f4f990db9432d72df9f8c368" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.833551 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7bd45" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.938041 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7c9df67c87-rtkk8"] Jan 26 19:10:38 crc kubenswrapper[4787]: E0126 19:10:38.938353 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b42381c-1dc3-4245-9d23-c0eec94f6ae1" containerName="keystone-bootstrap" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.938370 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b42381c-1dc3-4245-9d23-c0eec94f6ae1" containerName="keystone-bootstrap" Jan 26 19:10:38 crc kubenswrapper[4787]: E0126 19:10:38.938381 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c34360-12fe-4ad8-a7fc-f66984bfb115" containerName="init" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.938387 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c34360-12fe-4ad8-a7fc-f66984bfb115" containerName="init" Jan 26 19:10:38 crc kubenswrapper[4787]: E0126 19:10:38.938401 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c34360-12fe-4ad8-a7fc-f66984bfb115" containerName="dnsmasq-dns" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.938408 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c34360-12fe-4ad8-a7fc-f66984bfb115" containerName="dnsmasq-dns" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.938567 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c34360-12fe-4ad8-a7fc-f66984bfb115" containerName="dnsmasq-dns" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.938585 4787 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5b42381c-1dc3-4245-9d23-c0eec94f6ae1" containerName="keystone-bootstrap" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.939140 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.941285 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.941478 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.942373 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nczj4" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.942544 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 19:10:38 crc kubenswrapper[4787]: I0126 19:10:38.958141 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c9df67c87-rtkk8"] Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.098226 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-fernet-keys\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.098306 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-config-data\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.098363 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-combined-ca-bundle\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.098434 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gnxw\" (UniqueName: \"kubernetes.io/projected/eb9964c8-28d2-4c55-98ca-0cda083cf39d-kube-api-access-2gnxw\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.098496 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-credential-keys\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.098558 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-scripts\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.199591 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gnxw\" (UniqueName: \"kubernetes.io/projected/eb9964c8-28d2-4c55-98ca-0cda083cf39d-kube-api-access-2gnxw\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.199637 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-credential-keys\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.199689 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-scripts\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.199721 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-fernet-keys\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.199755 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-config-data\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.199778 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-combined-ca-bundle\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.206981 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-credential-keys\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.207199 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-scripts\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.207289 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-config-data\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.207491 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-fernet-keys\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.212760 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9964c8-28d2-4c55-98ca-0cda083cf39d-combined-ca-bundle\") pod \"keystone-7c9df67c87-rtkk8\" (UID: \"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.215761 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gnxw\" (UniqueName: \"kubernetes.io/projected/eb9964c8-28d2-4c55-98ca-0cda083cf39d-kube-api-access-2gnxw\") pod \"keystone-7c9df67c87-rtkk8\" (UID: 
\"eb9964c8-28d2-4c55-98ca-0cda083cf39d\") " pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.252688 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.601878 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c34360-12fe-4ad8-a7fc-f66984bfb115" path="/var/lib/kubelet/pods/59c34360-12fe-4ad8-a7fc-f66984bfb115/volumes" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.603655 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c9df67c87-rtkk8"] Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.841271 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c9df67c87-rtkk8" event={"ID":"eb9964c8-28d2-4c55-98ca-0cda083cf39d","Type":"ContainerStarted","Data":"a5654956abca9683e7cf40b07d21dd38855b37f6c8b50777f5117581096628d4"} Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.841652 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.841670 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c9df67c87-rtkk8" event={"ID":"eb9964c8-28d2-4c55-98ca-0cda083cf39d","Type":"ContainerStarted","Data":"35d939f71d1a452b24eca0d874616e0b492883358ec997cf9075f04c67efb406"} Jan 26 19:10:39 crc kubenswrapper[4787]: I0126 19:10:39.861095 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7c9df67c87-rtkk8" podStartSLOduration=1.861074602 podStartE2EDuration="1.861074602s" podCreationTimestamp="2026-01-26 19:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:10:39.857554466 +0000 UTC m=+5208.564690589" watchObservedRunningTime="2026-01-26 
19:10:39.861074602 +0000 UTC m=+5208.568210745" Jan 26 19:10:46 crc kubenswrapper[4787]: I0126 19:10:46.807994 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:10:46 crc kubenswrapper[4787]: I0126 19:10:46.808362 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:11:10 crc kubenswrapper[4787]: I0126 19:11:10.766666 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7c9df67c87-rtkk8" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.453907 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.455103 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.459725 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.460088 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-g47vg" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.460279 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.462712 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.503542 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config-secret\") pod \"openstackclient\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " pod="openstack/openstackclient" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.503600 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kk6n\" (UniqueName: \"kubernetes.io/projected/6362959b-77e9-45ad-b697-6a4978a116d4-kube-api-access-4kk6n\") pod \"openstackclient\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " pod="openstack/openstackclient" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.503660 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config\") pod \"openstackclient\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " pod="openstack/openstackclient" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.604588 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config-secret\") pod \"openstackclient\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " pod="openstack/openstackclient" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.604875 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kk6n\" (UniqueName: \"kubernetes.io/projected/6362959b-77e9-45ad-b697-6a4978a116d4-kube-api-access-4kk6n\") pod \"openstackclient\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " pod="openstack/openstackclient" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.604933 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config\") pod \"openstackclient\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " pod="openstack/openstackclient" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.605679 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config\") pod \"openstackclient\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " pod="openstack/openstackclient" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.610268 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config-secret\") pod \"openstackclient\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " pod="openstack/openstackclient" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.622357 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kk6n\" (UniqueName: 
\"kubernetes.io/projected/6362959b-77e9-45ad-b697-6a4978a116d4-kube-api-access-4kk6n\") pod \"openstackclient\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " pod="openstack/openstackclient" Jan 26 19:11:15 crc kubenswrapper[4787]: I0126 19:11:15.780545 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 19:11:16 crc kubenswrapper[4787]: I0126 19:11:16.206863 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 19:11:16 crc kubenswrapper[4787]: I0126 19:11:16.308098 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6362959b-77e9-45ad-b697-6a4978a116d4","Type":"ContainerStarted","Data":"19336f6defbc38df5a67dd24d50595bad6694ec20c2e1c3ad0ae7db5204a7368"} Jan 26 19:11:16 crc kubenswrapper[4787]: I0126 19:11:16.807809 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:11:16 crc kubenswrapper[4787]: I0126 19:11:16.807877 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:11:17 crc kubenswrapper[4787]: I0126 19:11:17.319756 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6362959b-77e9-45ad-b697-6a4978a116d4","Type":"ContainerStarted","Data":"29ed3d6f056a2fe140b016e801c297532d7fb34644b992d85649fb07018896f6"} Jan 26 19:11:17 crc kubenswrapper[4787]: I0126 19:11:17.347683 4787 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/openstackclient" podStartSLOduration=2.347663247 podStartE2EDuration="2.347663247s" podCreationTimestamp="2026-01-26 19:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:11:17.344518301 +0000 UTC m=+5246.051654464" watchObservedRunningTime="2026-01-26 19:11:17.347663247 +0000 UTC m=+5246.054799390" Jan 26 19:11:46 crc kubenswrapper[4787]: I0126 19:11:46.808257 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:11:46 crc kubenswrapper[4787]: I0126 19:11:46.809024 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:11:46 crc kubenswrapper[4787]: I0126 19:11:46.809091 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 19:11:46 crc kubenswrapper[4787]: I0126 19:11:46.810051 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:11:46 crc kubenswrapper[4787]: I0126 19:11:46.810140 4787 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" gracePeriod=600 Jan 26 19:11:47 crc kubenswrapper[4787]: E0126 19:11:47.444972 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:11:47 crc kubenswrapper[4787]: I0126 19:11:47.568453 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" exitCode=0 Jan 26 19:11:47 crc kubenswrapper[4787]: I0126 19:11:47.568522 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14"} Jan 26 19:11:47 crc kubenswrapper[4787]: I0126 19:11:47.568815 4787 scope.go:117] "RemoveContainer" containerID="e7ef07c1a92edc30d89515dae5c23c492bf520ebdb32affc92bae7157355d378" Jan 26 19:11:47 crc kubenswrapper[4787]: I0126 19:11:47.569437 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:11:47 crc kubenswrapper[4787]: E0126 19:11:47.569726 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:12:02 crc kubenswrapper[4787]: I0126 19:12:02.589764 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:12:02 crc kubenswrapper[4787]: E0126 19:12:02.590675 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:12:17 crc kubenswrapper[4787]: I0126 19:12:17.590129 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:12:17 crc kubenswrapper[4787]: E0126 19:12:17.591050 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:12:30 crc kubenswrapper[4787]: I0126 19:12:30.589260 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:12:30 crc kubenswrapper[4787]: E0126 19:12:30.590007 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:12:41 crc kubenswrapper[4787]: I0126 19:12:41.595000 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:12:41 crc kubenswrapper[4787]: E0126 19:12:41.608543 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:12:52 crc kubenswrapper[4787]: I0126 19:12:52.589305 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:12:52 crc kubenswrapper[4787]: E0126 19:12:52.590010 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:12:53 crc kubenswrapper[4787]: I0126 19:12:53.777388 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-d4v68"] Jan 26 19:12:53 crc kubenswrapper[4787]: I0126 19:12:53.778827 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-d4v68" Jan 26 19:12:53 crc kubenswrapper[4787]: I0126 19:12:53.792437 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d4v68"] Jan 26 19:12:53 crc kubenswrapper[4787]: I0126 19:12:53.879503 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a770-account-create-update-ptgbv"] Jan 26 19:12:53 crc kubenswrapper[4787]: I0126 19:12:53.880515 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a770-account-create-update-ptgbv" Jan 26 19:12:53 crc kubenswrapper[4787]: I0126 19:12:53.888072 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a770-account-create-update-ptgbv"] Jan 26 19:12:53 crc kubenswrapper[4787]: I0126 19:12:53.888732 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 19:12:53 crc kubenswrapper[4787]: I0126 19:12:53.908063 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-operator-scripts\") pod \"barbican-db-create-d4v68\" (UID: \"465cd4b3-ac1f-4e89-b560-6e40d2754fe3\") " pod="openstack/barbican-db-create-d4v68" Jan 26 19:12:53 crc kubenswrapper[4787]: I0126 19:12:53.908130 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxm99\" (UniqueName: \"kubernetes.io/projected/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-kube-api-access-sxm99\") pod \"barbican-db-create-d4v68\" (UID: \"465cd4b3-ac1f-4e89-b560-6e40d2754fe3\") " pod="openstack/barbican-db-create-d4v68" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.009838 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdzn5\" (UniqueName: 
\"kubernetes.io/projected/e84d8fb0-d826-48fe-9661-68f05468a9f2-kube-api-access-mdzn5\") pod \"barbican-a770-account-create-update-ptgbv\" (UID: \"e84d8fb0-d826-48fe-9661-68f05468a9f2\") " pod="openstack/barbican-a770-account-create-update-ptgbv" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.010180 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-operator-scripts\") pod \"barbican-db-create-d4v68\" (UID: \"465cd4b3-ac1f-4e89-b560-6e40d2754fe3\") " pod="openstack/barbican-db-create-d4v68" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.010224 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxm99\" (UniqueName: \"kubernetes.io/projected/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-kube-api-access-sxm99\") pod \"barbican-db-create-d4v68\" (UID: \"465cd4b3-ac1f-4e89-b560-6e40d2754fe3\") " pod="openstack/barbican-db-create-d4v68" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.010316 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84d8fb0-d826-48fe-9661-68f05468a9f2-operator-scripts\") pod \"barbican-a770-account-create-update-ptgbv\" (UID: \"e84d8fb0-d826-48fe-9661-68f05468a9f2\") " pod="openstack/barbican-a770-account-create-update-ptgbv" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.011008 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-operator-scripts\") pod \"barbican-db-create-d4v68\" (UID: \"465cd4b3-ac1f-4e89-b560-6e40d2754fe3\") " pod="openstack/barbican-db-create-d4v68" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.028340 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sxm99\" (UniqueName: \"kubernetes.io/projected/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-kube-api-access-sxm99\") pod \"barbican-db-create-d4v68\" (UID: \"465cd4b3-ac1f-4e89-b560-6e40d2754fe3\") " pod="openstack/barbican-db-create-d4v68" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.101499 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d4v68" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.112111 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdzn5\" (UniqueName: \"kubernetes.io/projected/e84d8fb0-d826-48fe-9661-68f05468a9f2-kube-api-access-mdzn5\") pod \"barbican-a770-account-create-update-ptgbv\" (UID: \"e84d8fb0-d826-48fe-9661-68f05468a9f2\") " pod="openstack/barbican-a770-account-create-update-ptgbv" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.112259 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84d8fb0-d826-48fe-9661-68f05468a9f2-operator-scripts\") pod \"barbican-a770-account-create-update-ptgbv\" (UID: \"e84d8fb0-d826-48fe-9661-68f05468a9f2\") " pod="openstack/barbican-a770-account-create-update-ptgbv" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.113272 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84d8fb0-d826-48fe-9661-68f05468a9f2-operator-scripts\") pod \"barbican-a770-account-create-update-ptgbv\" (UID: \"e84d8fb0-d826-48fe-9661-68f05468a9f2\") " pod="openstack/barbican-a770-account-create-update-ptgbv" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.139004 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdzn5\" (UniqueName: \"kubernetes.io/projected/e84d8fb0-d826-48fe-9661-68f05468a9f2-kube-api-access-mdzn5\") pod 
\"barbican-a770-account-create-update-ptgbv\" (UID: \"e84d8fb0-d826-48fe-9661-68f05468a9f2\") " pod="openstack/barbican-a770-account-create-update-ptgbv" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.201773 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a770-account-create-update-ptgbv" Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.608366 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d4v68"] Jan 26 19:12:54 crc kubenswrapper[4787]: I0126 19:12:54.686115 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a770-account-create-update-ptgbv"] Jan 26 19:12:54 crc kubenswrapper[4787]: W0126 19:12:54.689481 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode84d8fb0_d826_48fe_9661_68f05468a9f2.slice/crio-f9720e9c450aacad3dd15b21fa6ed4757237f0b4b5e9f5ff51bbddff8868caa4 WatchSource:0}: Error finding container f9720e9c450aacad3dd15b21fa6ed4757237f0b4b5e9f5ff51bbddff8868caa4: Status 404 returned error can't find the container with id f9720e9c450aacad3dd15b21fa6ed4757237f0b4b5e9f5ff51bbddff8868caa4 Jan 26 19:12:55 crc kubenswrapper[4787]: I0126 19:12:55.105634 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d4v68" event={"ID":"465cd4b3-ac1f-4e89-b560-6e40d2754fe3","Type":"ContainerStarted","Data":"4a25a4e20c9fdd28a118b787d674dad975b4ae96767d4fbdaafdb3cb72efc7e7"} Jan 26 19:12:55 crc kubenswrapper[4787]: I0126 19:12:55.106287 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d4v68" event={"ID":"465cd4b3-ac1f-4e89-b560-6e40d2754fe3","Type":"ContainerStarted","Data":"60dd1e16efe722320e75d82d651bc4a2c5dbe07cc6af0021f8bb60bd613c9e6b"} Jan 26 19:12:55 crc kubenswrapper[4787]: I0126 19:12:55.108578 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-a770-account-create-update-ptgbv" event={"ID":"e84d8fb0-d826-48fe-9661-68f05468a9f2","Type":"ContainerStarted","Data":"754b844e6149d8492303332ebc91243c8e6dcf1faff8523fceb632f392eb6d8d"} Jan 26 19:12:55 crc kubenswrapper[4787]: I0126 19:12:55.108650 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a770-account-create-update-ptgbv" event={"ID":"e84d8fb0-d826-48fe-9661-68f05468a9f2","Type":"ContainerStarted","Data":"f9720e9c450aacad3dd15b21fa6ed4757237f0b4b5e9f5ff51bbddff8868caa4"} Jan 26 19:12:55 crc kubenswrapper[4787]: I0126 19:12:55.124912 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-d4v68" podStartSLOduration=2.124884344 podStartE2EDuration="2.124884344s" podCreationTimestamp="2026-01-26 19:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:12:55.12308246 +0000 UTC m=+5343.830218593" watchObservedRunningTime="2026-01-26 19:12:55.124884344 +0000 UTC m=+5343.832020467" Jan 26 19:12:55 crc kubenswrapper[4787]: I0126 19:12:55.140425 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-a770-account-create-update-ptgbv" podStartSLOduration=2.140408281 podStartE2EDuration="2.140408281s" podCreationTimestamp="2026-01-26 19:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:12:55.140307789 +0000 UTC m=+5343.847443922" watchObservedRunningTime="2026-01-26 19:12:55.140408281 +0000 UTC m=+5343.847544414" Jan 26 19:12:58 crc kubenswrapper[4787]: I0126 19:12:58.131676 4787 generic.go:334] "Generic (PLEG): container finished" podID="e84d8fb0-d826-48fe-9661-68f05468a9f2" containerID="754b844e6149d8492303332ebc91243c8e6dcf1faff8523fceb632f392eb6d8d" exitCode=0 Jan 26 19:12:58 crc kubenswrapper[4787]: I0126 
19:12:58.131883 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a770-account-create-update-ptgbv" event={"ID":"e84d8fb0-d826-48fe-9661-68f05468a9f2","Type":"ContainerDied","Data":"754b844e6149d8492303332ebc91243c8e6dcf1faff8523fceb632f392eb6d8d"} Jan 26 19:12:58 crc kubenswrapper[4787]: I0126 19:12:58.133271 4787 generic.go:334] "Generic (PLEG): container finished" podID="465cd4b3-ac1f-4e89-b560-6e40d2754fe3" containerID="4a25a4e20c9fdd28a118b787d674dad975b4ae96767d4fbdaafdb3cb72efc7e7" exitCode=0 Jan 26 19:12:58 crc kubenswrapper[4787]: I0126 19:12:58.133306 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d4v68" event={"ID":"465cd4b3-ac1f-4e89-b560-6e40d2754fe3","Type":"ContainerDied","Data":"4a25a4e20c9fdd28a118b787d674dad975b4ae96767d4fbdaafdb3cb72efc7e7"} Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.512490 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a770-account-create-update-ptgbv" Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.526736 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-d4v68" Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.609524 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxm99\" (UniqueName: \"kubernetes.io/projected/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-kube-api-access-sxm99\") pod \"465cd4b3-ac1f-4e89-b560-6e40d2754fe3\" (UID: \"465cd4b3-ac1f-4e89-b560-6e40d2754fe3\") " Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.609667 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-operator-scripts\") pod \"465cd4b3-ac1f-4e89-b560-6e40d2754fe3\" (UID: \"465cd4b3-ac1f-4e89-b560-6e40d2754fe3\") " Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.609712 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84d8fb0-d826-48fe-9661-68f05468a9f2-operator-scripts\") pod \"e84d8fb0-d826-48fe-9661-68f05468a9f2\" (UID: \"e84d8fb0-d826-48fe-9661-68f05468a9f2\") " Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.609740 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdzn5\" (UniqueName: \"kubernetes.io/projected/e84d8fb0-d826-48fe-9661-68f05468a9f2-kube-api-access-mdzn5\") pod \"e84d8fb0-d826-48fe-9661-68f05468a9f2\" (UID: \"e84d8fb0-d826-48fe-9661-68f05468a9f2\") " Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.611333 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "465cd4b3-ac1f-4e89-b560-6e40d2754fe3" (UID: "465cd4b3-ac1f-4e89-b560-6e40d2754fe3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.611759 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e84d8fb0-d826-48fe-9661-68f05468a9f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e84d8fb0-d826-48fe-9661-68f05468a9f2" (UID: "e84d8fb0-d826-48fe-9661-68f05468a9f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.617316 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-kube-api-access-sxm99" (OuterVolumeSpecName: "kube-api-access-sxm99") pod "465cd4b3-ac1f-4e89-b560-6e40d2754fe3" (UID: "465cd4b3-ac1f-4e89-b560-6e40d2754fe3"). InnerVolumeSpecName "kube-api-access-sxm99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.617879 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84d8fb0-d826-48fe-9661-68f05468a9f2-kube-api-access-mdzn5" (OuterVolumeSpecName: "kube-api-access-mdzn5") pod "e84d8fb0-d826-48fe-9661-68f05468a9f2" (UID: "e84d8fb0-d826-48fe-9661-68f05468a9f2"). InnerVolumeSpecName "kube-api-access-mdzn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.712317 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxm99\" (UniqueName: \"kubernetes.io/projected/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-kube-api-access-sxm99\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.712352 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465cd4b3-ac1f-4e89-b560-6e40d2754fe3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.712364 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84d8fb0-d826-48fe-9661-68f05468a9f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:59 crc kubenswrapper[4787]: I0126 19:12:59.712377 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdzn5\" (UniqueName: \"kubernetes.io/projected/e84d8fb0-d826-48fe-9661-68f05468a9f2-kube-api-access-mdzn5\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:00 crc kubenswrapper[4787]: I0126 19:13:00.149659 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d4v68" event={"ID":"465cd4b3-ac1f-4e89-b560-6e40d2754fe3","Type":"ContainerDied","Data":"60dd1e16efe722320e75d82d651bc4a2c5dbe07cc6af0021f8bb60bd613c9e6b"} Jan 26 19:13:00 crc kubenswrapper[4787]: I0126 19:13:00.149696 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-d4v68" Jan 26 19:13:00 crc kubenswrapper[4787]: I0126 19:13:00.149706 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60dd1e16efe722320e75d82d651bc4a2c5dbe07cc6af0021f8bb60bd613c9e6b" Jan 26 19:13:00 crc kubenswrapper[4787]: I0126 19:13:00.152567 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a770-account-create-update-ptgbv" event={"ID":"e84d8fb0-d826-48fe-9661-68f05468a9f2","Type":"ContainerDied","Data":"f9720e9c450aacad3dd15b21fa6ed4757237f0b4b5e9f5ff51bbddff8868caa4"} Jan 26 19:13:00 crc kubenswrapper[4787]: I0126 19:13:00.152609 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9720e9c450aacad3dd15b21fa6ed4757237f0b4b5e9f5ff51bbddff8868caa4" Jan 26 19:13:00 crc kubenswrapper[4787]: I0126 19:13:00.152695 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a770-account-create-update-ptgbv" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.229454 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pmknk"] Jan 26 19:13:04 crc kubenswrapper[4787]: E0126 19:13:04.229886 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465cd4b3-ac1f-4e89-b560-6e40d2754fe3" containerName="mariadb-database-create" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.230163 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="465cd4b3-ac1f-4e89-b560-6e40d2754fe3" containerName="mariadb-database-create" Jan 26 19:13:04 crc kubenswrapper[4787]: E0126 19:13:04.230182 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84d8fb0-d826-48fe-9661-68f05468a9f2" containerName="mariadb-account-create-update" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.230191 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84d8fb0-d826-48fe-9661-68f05468a9f2" 
containerName="mariadb-account-create-update" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.230391 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84d8fb0-d826-48fe-9661-68f05468a9f2" containerName="mariadb-account-create-update" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.230407 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="465cd4b3-ac1f-4e89-b560-6e40d2754fe3" containerName="mariadb-database-create" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.231178 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.233748 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kfjzf" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.233969 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.252094 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pmknk"] Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.384748 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r9r4\" (UniqueName: \"kubernetes.io/projected/ad2d4cef-d169-474d-a245-7e489c22cb6c-kube-api-access-4r9r4\") pod \"barbican-db-sync-pmknk\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.384874 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-combined-ca-bundle\") pod \"barbican-db-sync-pmknk\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:04 crc 
kubenswrapper[4787]: I0126 19:13:04.384921 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-db-sync-config-data\") pod \"barbican-db-sync-pmknk\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.486695 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r9r4\" (UniqueName: \"kubernetes.io/projected/ad2d4cef-d169-474d-a245-7e489c22cb6c-kube-api-access-4r9r4\") pod \"barbican-db-sync-pmknk\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.486830 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-combined-ca-bundle\") pod \"barbican-db-sync-pmknk\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.486885 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-db-sync-config-data\") pod \"barbican-db-sync-pmknk\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.493784 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-combined-ca-bundle\") pod \"barbican-db-sync-pmknk\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.500995 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-db-sync-config-data\") pod \"barbican-db-sync-pmknk\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.506115 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r9r4\" (UniqueName: \"kubernetes.io/projected/ad2d4cef-d169-474d-a245-7e489c22cb6c-kube-api-access-4r9r4\") pod \"barbican-db-sync-pmknk\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:04 crc kubenswrapper[4787]: I0126 19:13:04.566209 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:05 crc kubenswrapper[4787]: I0126 19:13:05.031383 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pmknk"] Jan 26 19:13:05 crc kubenswrapper[4787]: I0126 19:13:05.195412 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pmknk" event={"ID":"ad2d4cef-d169-474d-a245-7e489c22cb6c","Type":"ContainerStarted","Data":"34e2b39cdff5e091b09e1730f41b3634ef4c63d7f9969c0dbcb1a4dc158d6fce"} Jan 26 19:13:06 crc kubenswrapper[4787]: I0126 19:13:06.203536 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pmknk" event={"ID":"ad2d4cef-d169-474d-a245-7e489c22cb6c","Type":"ContainerStarted","Data":"b86265110814e8c696c61160b2c3a0f97de67376505895f05bea4d9b3691c377"} Jan 26 19:13:06 crc kubenswrapper[4787]: I0126 19:13:06.226023 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pmknk" podStartSLOduration=2.225999449 podStartE2EDuration="2.225999449s" podCreationTimestamp="2026-01-26 19:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:13:06.224219296 +0000 UTC m=+5354.931355439" watchObservedRunningTime="2026-01-26 19:13:06.225999449 +0000 UTC m=+5354.933135582" Jan 26 19:13:07 crc kubenswrapper[4787]: I0126 19:13:07.213940 4787 generic.go:334] "Generic (PLEG): container finished" podID="ad2d4cef-d169-474d-a245-7e489c22cb6c" containerID="b86265110814e8c696c61160b2c3a0f97de67376505895f05bea4d9b3691c377" exitCode=0 Jan 26 19:13:07 crc kubenswrapper[4787]: I0126 19:13:07.214004 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pmknk" event={"ID":"ad2d4cef-d169-474d-a245-7e489c22cb6c","Type":"ContainerDied","Data":"b86265110814e8c696c61160b2c3a0f97de67376505895f05bea4d9b3691c377"} Jan 26 19:13:07 crc kubenswrapper[4787]: I0126 19:13:07.589491 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:13:07 crc kubenswrapper[4787]: E0126 19:13:07.589735 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:13:08 crc kubenswrapper[4787]: I0126 19:13:08.546141 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:08 crc kubenswrapper[4787]: I0126 19:13:08.663391 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-combined-ca-bundle\") pod \"ad2d4cef-d169-474d-a245-7e489c22cb6c\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " Jan 26 19:13:08 crc kubenswrapper[4787]: I0126 19:13:08.663490 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-db-sync-config-data\") pod \"ad2d4cef-d169-474d-a245-7e489c22cb6c\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " Jan 26 19:13:08 crc kubenswrapper[4787]: I0126 19:13:08.663549 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r9r4\" (UniqueName: \"kubernetes.io/projected/ad2d4cef-d169-474d-a245-7e489c22cb6c-kube-api-access-4r9r4\") pod \"ad2d4cef-d169-474d-a245-7e489c22cb6c\" (UID: \"ad2d4cef-d169-474d-a245-7e489c22cb6c\") " Jan 26 19:13:08 crc kubenswrapper[4787]: I0126 19:13:08.679132 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ad2d4cef-d169-474d-a245-7e489c22cb6c" (UID: "ad2d4cef-d169-474d-a245-7e489c22cb6c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:08 crc kubenswrapper[4787]: I0126 19:13:08.682275 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad2d4cef-d169-474d-a245-7e489c22cb6c-kube-api-access-4r9r4" (OuterVolumeSpecName: "kube-api-access-4r9r4") pod "ad2d4cef-d169-474d-a245-7e489c22cb6c" (UID: "ad2d4cef-d169-474d-a245-7e489c22cb6c"). 
InnerVolumeSpecName "kube-api-access-4r9r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:13:08 crc kubenswrapper[4787]: I0126 19:13:08.694354 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad2d4cef-d169-474d-a245-7e489c22cb6c" (UID: "ad2d4cef-d169-474d-a245-7e489c22cb6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:08 crc kubenswrapper[4787]: I0126 19:13:08.765219 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r9r4\" (UniqueName: \"kubernetes.io/projected/ad2d4cef-d169-474d-a245-7e489c22cb6c-kube-api-access-4r9r4\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:08 crc kubenswrapper[4787]: I0126 19:13:08.765271 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:08 crc kubenswrapper[4787]: I0126 19:13:08.765289 4787 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad2d4cef-d169-474d-a245-7e489c22cb6c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.237047 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pmknk" event={"ID":"ad2d4cef-d169-474d-a245-7e489c22cb6c","Type":"ContainerDied","Data":"34e2b39cdff5e091b09e1730f41b3634ef4c63d7f9969c0dbcb1a4dc158d6fce"} Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.237085 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34e2b39cdff5e091b09e1730f41b3634ef4c63d7f9969c0dbcb1a4dc158d6fce" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.237106 4787 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pmknk" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.353628 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-b54957c57-vn9rc"] Jan 26 19:13:09 crc kubenswrapper[4787]: E0126 19:13:09.354004 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2d4cef-d169-474d-a245-7e489c22cb6c" containerName="barbican-db-sync" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.354023 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2d4cef-d169-474d-a245-7e489c22cb6c" containerName="barbican-db-sync" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.354206 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2d4cef-d169-474d-a245-7e489c22cb6c" containerName="barbican-db-sync" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.355780 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.363814 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.365770 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kfjzf" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.366549 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.375265 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-b54957c57-vn9rc"] Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.405516 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6997547fdb-fnvb9"] Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.407388 4787 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.410136 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.430742 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6997547fdb-fnvb9"] Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.475935 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfb9e125-ffc8-4211-b147-04adec3df7ac-config-data-custom\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.476017 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dp9f\" (UniqueName: \"kubernetes.io/projected/dfb9e125-ffc8-4211-b147-04adec3df7ac-kube-api-access-4dp9f\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.476094 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb9e125-ffc8-4211-b147-04adec3df7ac-config-data\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.476124 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dfb9e125-ffc8-4211-b147-04adec3df7ac-combined-ca-bundle\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.476155 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfb9e125-ffc8-4211-b147-04adec3df7ac-logs\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.511581 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bb55df559-8jjs5"] Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.512751 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.541295 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb55df559-8jjs5"] Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.580502 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb9e125-ffc8-4211-b147-04adec3df7ac-config-data\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.580572 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb9e125-ffc8-4211-b147-04adec3df7ac-combined-ca-bundle\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.580627 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbng5\" (UniqueName: \"kubernetes.io/projected/0e794d83-0f4e-4111-8873-23376c85c1d8-kube-api-access-jbng5\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.580659 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e794d83-0f4e-4111-8873-23376c85c1d8-logs\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.580683 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfb9e125-ffc8-4211-b147-04adec3df7ac-logs\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.580713 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e794d83-0f4e-4111-8873-23376c85c1d8-config-data\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.580738 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfb9e125-ffc8-4211-b147-04adec3df7ac-config-data-custom\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " 
pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.580785 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dp9f\" (UniqueName: \"kubernetes.io/projected/dfb9e125-ffc8-4211-b147-04adec3df7ac-kube-api-access-4dp9f\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.580850 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e794d83-0f4e-4111-8873-23376c85c1d8-config-data-custom\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.580879 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e794d83-0f4e-4111-8873-23376c85c1d8-combined-ca-bundle\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.582045 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfb9e125-ffc8-4211-b147-04adec3df7ac-logs\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.608181 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb9e125-ffc8-4211-b147-04adec3df7ac-combined-ca-bundle\") pod 
\"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.609141 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfb9e125-ffc8-4211-b147-04adec3df7ac-config-data-custom\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.618567 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb9e125-ffc8-4211-b147-04adec3df7ac-config-data\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.638691 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dp9f\" (UniqueName: \"kubernetes.io/projected/dfb9e125-ffc8-4211-b147-04adec3df7ac-kube-api-access-4dp9f\") pod \"barbican-worker-b54957c57-vn9rc\" (UID: \"dfb9e125-ffc8-4211-b147-04adec3df7ac\") " pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.674557 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-b54957c57-vn9rc" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.685865 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-dns-svc\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.685933 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.685985 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e794d83-0f4e-4111-8873-23376c85c1d8-config-data-custom\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.686011 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e794d83-0f4e-4111-8873-23376c85c1d8-combined-ca-bundle\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.686037 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-config\") pod 
\"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.686058 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr4d9\" (UniqueName: \"kubernetes.io/projected/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-kube-api-access-sr4d9\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.686137 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbng5\" (UniqueName: \"kubernetes.io/projected/0e794d83-0f4e-4111-8873-23376c85c1d8-kube-api-access-jbng5\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.686155 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.686176 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e794d83-0f4e-4111-8873-23376c85c1d8-logs\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.686202 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0e794d83-0f4e-4111-8873-23376c85c1d8-config-data\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.691223 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e794d83-0f4e-4111-8873-23376c85c1d8-logs\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.692172 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e794d83-0f4e-4111-8873-23376c85c1d8-config-data-custom\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.693141 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e794d83-0f4e-4111-8873-23376c85c1d8-config-data\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.697547 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e794d83-0f4e-4111-8873-23376c85c1d8-combined-ca-bundle\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.723781 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-api-dd98956c6-86mmx"] Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.725235 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.731859 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.745701 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-dd98956c6-86mmx"] Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.752837 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbng5\" (UniqueName: \"kubernetes.io/projected/0e794d83-0f4e-4111-8873-23376c85c1d8-kube-api-access-jbng5\") pod \"barbican-keystone-listener-6997547fdb-fnvb9\" (UID: \"0e794d83-0f4e-4111-8873-23376c85c1d8\") " pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.760775 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.788850 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.788923 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-config\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.788979 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr4d9\" (UniqueName: \"kubernetes.io/projected/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-kube-api-access-sr4d9\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.789038 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.789082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-dns-svc\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " 
pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.789912 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-dns-svc\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.790465 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.790994 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-config\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.791820 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.836807 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr4d9\" (UniqueName: \"kubernetes.io/projected/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-kube-api-access-sr4d9\") pod \"dnsmasq-dns-5bb55df559-8jjs5\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.837263 
4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.890729 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19143c4-6c81-4684-8712-ba99d98ba256-combined-ca-bundle\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.891132 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl8gl\" (UniqueName: \"kubernetes.io/projected/e19143c4-6c81-4684-8712-ba99d98ba256-kube-api-access-dl8gl\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.891287 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19143c4-6c81-4684-8712-ba99d98ba256-config-data\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.891364 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19143c4-6c81-4684-8712-ba99d98ba256-logs\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.891401 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e19143c4-6c81-4684-8712-ba99d98ba256-config-data-custom\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.993226 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl8gl\" (UniqueName: \"kubernetes.io/projected/e19143c4-6c81-4684-8712-ba99d98ba256-kube-api-access-dl8gl\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.993595 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19143c4-6c81-4684-8712-ba99d98ba256-config-data\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.993646 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19143c4-6c81-4684-8712-ba99d98ba256-logs\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.993675 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e19143c4-6c81-4684-8712-ba99d98ba256-config-data-custom\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:09 crc kubenswrapper[4787]: I0126 19:13:09.993717 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e19143c4-6c81-4684-8712-ba99d98ba256-combined-ca-bundle\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:10 crc kubenswrapper[4787]: I0126 19:13:10.000564 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19143c4-6c81-4684-8712-ba99d98ba256-logs\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:10 crc kubenswrapper[4787]: I0126 19:13:10.001824 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19143c4-6c81-4684-8712-ba99d98ba256-combined-ca-bundle\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:10 crc kubenswrapper[4787]: I0126 19:13:10.010247 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19143c4-6c81-4684-8712-ba99d98ba256-config-data\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:10 crc kubenswrapper[4787]: I0126 19:13:10.011064 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl8gl\" (UniqueName: \"kubernetes.io/projected/e19143c4-6c81-4684-8712-ba99d98ba256-kube-api-access-dl8gl\") pod \"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:10 crc kubenswrapper[4787]: I0126 19:13:10.012445 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e19143c4-6c81-4684-8712-ba99d98ba256-config-data-custom\") pod 
\"barbican-api-dd98956c6-86mmx\" (UID: \"e19143c4-6c81-4684-8712-ba99d98ba256\") " pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:10 crc kubenswrapper[4787]: I0126 19:13:10.114638 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:10 crc kubenswrapper[4787]: I0126 19:13:10.362461 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6997547fdb-fnvb9"] Jan 26 19:13:10 crc kubenswrapper[4787]: W0126 19:13:10.373074 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfb9e125_ffc8_4211_b147_04adec3df7ac.slice/crio-155d0ec92993fbc37bb6c604578aa69a9212bf3e887e1a9fe45ecab38c0fa1c4 WatchSource:0}: Error finding container 155d0ec92993fbc37bb6c604578aa69a9212bf3e887e1a9fe45ecab38c0fa1c4: Status 404 returned error can't find the container with id 155d0ec92993fbc37bb6c604578aa69a9212bf3e887e1a9fe45ecab38c0fa1c4 Jan 26 19:13:10 crc kubenswrapper[4787]: I0126 19:13:10.378203 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-b54957c57-vn9rc"] Jan 26 19:13:10 crc kubenswrapper[4787]: I0126 19:13:10.456162 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb55df559-8jjs5"] Jan 26 19:13:10 crc kubenswrapper[4787]: W0126 19:13:10.463779 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7841c8e0_74ad_4638_b2ef_326a9bcd00fb.slice/crio-42cf7c2efc1198f6aadf8f430dc0ee5dea20fd87fd43e16ae851f66dce3c983f WatchSource:0}: Error finding container 42cf7c2efc1198f6aadf8f430dc0ee5dea20fd87fd43e16ae851f66dce3c983f: Status 404 returned error can't find the container with id 42cf7c2efc1198f6aadf8f430dc0ee5dea20fd87fd43e16ae851f66dce3c983f Jan 26 19:13:10 crc kubenswrapper[4787]: I0126 19:13:10.597669 4787 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-api-dd98956c6-86mmx"] Jan 26 19:13:10 crc kubenswrapper[4787]: W0126 19:13:10.604644 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode19143c4_6c81_4684_8712_ba99d98ba256.slice/crio-a1cc2be37eb5eaa2bd315cb7db1d28141de2246d7b3aa15f576c428857fdc3c3 WatchSource:0}: Error finding container a1cc2be37eb5eaa2bd315cb7db1d28141de2246d7b3aa15f576c428857fdc3c3: Status 404 returned error can't find the container with id a1cc2be37eb5eaa2bd315cb7db1d28141de2246d7b3aa15f576c428857fdc3c3 Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.262930 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dd98956c6-86mmx" event={"ID":"e19143c4-6c81-4684-8712-ba99d98ba256","Type":"ContainerStarted","Data":"c7873f5c85c1c2a52f596f16d3969b135e606c42b785d2f46578ca96d79a9ebb"} Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.263263 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dd98956c6-86mmx" event={"ID":"e19143c4-6c81-4684-8712-ba99d98ba256","Type":"ContainerStarted","Data":"b1529ddbb27a01d425b3cb20d9c4e53930d80429ac18fa22c53b193c5bd39554"} Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.263282 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.263294 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dd98956c6-86mmx" event={"ID":"e19143c4-6c81-4684-8712-ba99d98ba256","Type":"ContainerStarted","Data":"a1cc2be37eb5eaa2bd315cb7db1d28141de2246d7b3aa15f576c428857fdc3c3"} Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.263305 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.265323 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" event={"ID":"0e794d83-0f4e-4111-8873-23376c85c1d8","Type":"ContainerStarted","Data":"559a24ca54668dc8595b69e4a8daea0f99e8ff64905f3311cc3fffdaa1e1477f"} Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.265366 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" event={"ID":"0e794d83-0f4e-4111-8873-23376c85c1d8","Type":"ContainerStarted","Data":"d1d43c087907f8be1ca8d4f4dd14746e5bee0c34341bd03b956d3b7acb53a1f3"} Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.265384 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" event={"ID":"0e794d83-0f4e-4111-8873-23376c85c1d8","Type":"ContainerStarted","Data":"a8cb0d0027b69949561e8a859e9bd668b72212ea598d540393cb2bb895552df8"} Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.268375 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b54957c57-vn9rc" event={"ID":"dfb9e125-ffc8-4211-b147-04adec3df7ac","Type":"ContainerStarted","Data":"f0cb91e6d9f0b384686ed0f71deee24d2974244bcc4ba8c63b2591410a9784f1"} Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.268419 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b54957c57-vn9rc" event={"ID":"dfb9e125-ffc8-4211-b147-04adec3df7ac","Type":"ContainerStarted","Data":"72d300fefb2f49c9beb6033ffdff6d49798a219e4903943b2d4fc8c94606d00c"} Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.268432 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b54957c57-vn9rc" event={"ID":"dfb9e125-ffc8-4211-b147-04adec3df7ac","Type":"ContainerStarted","Data":"155d0ec92993fbc37bb6c604578aa69a9212bf3e887e1a9fe45ecab38c0fa1c4"} Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.270769 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="7841c8e0-74ad-4638-b2ef-326a9bcd00fb" containerID="ed418755fad2ce15fa67dcfaaabc7c223fe99c03e3b9bc868ac52725f3221c6d" exitCode=0 Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.270816 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" event={"ID":"7841c8e0-74ad-4638-b2ef-326a9bcd00fb","Type":"ContainerDied","Data":"ed418755fad2ce15fa67dcfaaabc7c223fe99c03e3b9bc868ac52725f3221c6d"} Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.270856 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" event={"ID":"7841c8e0-74ad-4638-b2ef-326a9bcd00fb","Type":"ContainerStarted","Data":"42cf7c2efc1198f6aadf8f430dc0ee5dea20fd87fd43e16ae851f66dce3c983f"} Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.287031 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-dd98956c6-86mmx" podStartSLOduration=2.287011604 podStartE2EDuration="2.287011604s" podCreationTimestamp="2026-01-26 19:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:13:11.282856833 +0000 UTC m=+5359.989992966" watchObservedRunningTime="2026-01-26 19:13:11.287011604 +0000 UTC m=+5359.994147737" Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.314751 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6997547fdb-fnvb9" podStartSLOduration=2.314733238 podStartE2EDuration="2.314733238s" podCreationTimestamp="2026-01-26 19:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:13:11.314195485 +0000 UTC m=+5360.021331628" watchObservedRunningTime="2026-01-26 19:13:11.314733238 +0000 UTC m=+5360.021869381" Jan 26 19:13:11 crc kubenswrapper[4787]: I0126 19:13:11.367110 4787 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-b54957c57-vn9rc" podStartSLOduration=2.36708788 podStartE2EDuration="2.36708788s" podCreationTimestamp="2026-01-26 19:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:13:11.365486371 +0000 UTC m=+5360.072622504" watchObservedRunningTime="2026-01-26 19:13:11.36708788 +0000 UTC m=+5360.074224013" Jan 26 19:13:12 crc kubenswrapper[4787]: I0126 19:13:12.280310 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" event={"ID":"7841c8e0-74ad-4638-b2ef-326a9bcd00fb","Type":"ContainerStarted","Data":"484a85ea505224c5ad8e6639403454141916f7a38e154c095439738d06fb911f"} Jan 26 19:13:12 crc kubenswrapper[4787]: I0126 19:13:12.305742 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" podStartSLOduration=3.305724475 podStartE2EDuration="3.305724475s" podCreationTimestamp="2026-01-26 19:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:13:12.305312154 +0000 UTC m=+5361.012448277" watchObservedRunningTime="2026-01-26 19:13:12.305724475 +0000 UTC m=+5361.012860608" Jan 26 19:13:13 crc kubenswrapper[4787]: I0126 19:13:13.301432 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:16 crc kubenswrapper[4787]: I0126 19:13:16.641012 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:18 crc kubenswrapper[4787]: I0126 19:13:18.109011 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-dd98956c6-86mmx" Jan 26 19:13:18 crc kubenswrapper[4787]: I0126 19:13:18.589286 4787 
scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:13:18 crc kubenswrapper[4787]: E0126 19:13:18.589514 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:13:19 crc kubenswrapper[4787]: I0126 19:13:19.839437 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:19 crc kubenswrapper[4787]: I0126 19:13:19.921005 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549665877f-ts9sr"] Jan 26 19:13:19 crc kubenswrapper[4787]: I0126 19:13:19.921244 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-549665877f-ts9sr" podUID="d2d75cea-2569-4172-9225-38f5b18d35bb" containerName="dnsmasq-dns" containerID="cri-o://62f57cc0cf85457dbea41ec9ff0772d3cbbae8ba9efb0f0300a598fe2e380737" gracePeriod=10 Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.370978 4787 generic.go:334] "Generic (PLEG): container finished" podID="d2d75cea-2569-4172-9225-38f5b18d35bb" containerID="62f57cc0cf85457dbea41ec9ff0772d3cbbae8ba9efb0f0300a598fe2e380737" exitCode=0 Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.371054 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549665877f-ts9sr" event={"ID":"d2d75cea-2569-4172-9225-38f5b18d35bb","Type":"ContainerDied","Data":"62f57cc0cf85457dbea41ec9ff0772d3cbbae8ba9efb0f0300a598fe2e380737"} Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.456483 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.582251 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-nb\") pod \"d2d75cea-2569-4172-9225-38f5b18d35bb\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.583363 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-dns-svc\") pod \"d2d75cea-2569-4172-9225-38f5b18d35bb\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.583440 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-config\") pod \"d2d75cea-2569-4172-9225-38f5b18d35bb\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.583874 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpghd\" (UniqueName: \"kubernetes.io/projected/d2d75cea-2569-4172-9225-38f5b18d35bb-kube-api-access-cpghd\") pod \"d2d75cea-2569-4172-9225-38f5b18d35bb\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.583940 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-sb\") pod \"d2d75cea-2569-4172-9225-38f5b18d35bb\" (UID: \"d2d75cea-2569-4172-9225-38f5b18d35bb\") " Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.604234 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d2d75cea-2569-4172-9225-38f5b18d35bb-kube-api-access-cpghd" (OuterVolumeSpecName: "kube-api-access-cpghd") pod "d2d75cea-2569-4172-9225-38f5b18d35bb" (UID: "d2d75cea-2569-4172-9225-38f5b18d35bb"). InnerVolumeSpecName "kube-api-access-cpghd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.674654 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2d75cea-2569-4172-9225-38f5b18d35bb" (UID: "d2d75cea-2569-4172-9225-38f5b18d35bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.685919 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.685965 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpghd\" (UniqueName: \"kubernetes.io/projected/d2d75cea-2569-4172-9225-38f5b18d35bb-kube-api-access-cpghd\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.693227 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-spjxb"] Jan 26 19:13:20 crc kubenswrapper[4787]: E0126 19:13:20.693843 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d75cea-2569-4172-9225-38f5b18d35bb" containerName="dnsmasq-dns" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.693923 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d75cea-2569-4172-9225-38f5b18d35bb" containerName="dnsmasq-dns" Jan 26 19:13:20 crc kubenswrapper[4787]: E0126 19:13:20.694047 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d2d75cea-2569-4172-9225-38f5b18d35bb" containerName="init" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.694114 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d75cea-2569-4172-9225-38f5b18d35bb" containerName="init" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.694414 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d75cea-2569-4172-9225-38f5b18d35bb" containerName="dnsmasq-dns" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.697005 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.698995 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-spjxb"] Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.711258 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2d75cea-2569-4172-9225-38f5b18d35bb" (UID: "d2d75cea-2569-4172-9225-38f5b18d35bb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.713968 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-config" (OuterVolumeSpecName: "config") pod "d2d75cea-2569-4172-9225-38f5b18d35bb" (UID: "d2d75cea-2569-4172-9225-38f5b18d35bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.742034 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2d75cea-2569-4172-9225-38f5b18d35bb" (UID: "d2d75cea-2569-4172-9225-38f5b18d35bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.790962 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btt7j\" (UniqueName: \"kubernetes.io/projected/e274a7dc-5150-47c2-a61a-9b1fefa7a241-kube-api-access-btt7j\") pod \"community-operators-spjxb\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.791068 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-catalog-content\") pod \"community-operators-spjxb\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.791122 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-utilities\") pod \"community-operators-spjxb\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.791176 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-sb\") on node \"crc\" 
DevicePath \"\"" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.791186 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.791196 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d75cea-2569-4172-9225-38f5b18d35bb-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.892340 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-catalog-content\") pod \"community-operators-spjxb\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.892435 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-utilities\") pod \"community-operators-spjxb\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.892485 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btt7j\" (UniqueName: \"kubernetes.io/projected/e274a7dc-5150-47c2-a61a-9b1fefa7a241-kube-api-access-btt7j\") pod \"community-operators-spjxb\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.893193 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-catalog-content\") pod 
\"community-operators-spjxb\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.893224 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-utilities\") pod \"community-operators-spjxb\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:20 crc kubenswrapper[4787]: I0126 19:13:20.910677 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btt7j\" (UniqueName: \"kubernetes.io/projected/e274a7dc-5150-47c2-a61a-9b1fefa7a241-kube-api-access-btt7j\") pod \"community-operators-spjxb\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:21 crc kubenswrapper[4787]: I0126 19:13:21.049467 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:21 crc kubenswrapper[4787]: I0126 19:13:21.389574 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549665877f-ts9sr" event={"ID":"d2d75cea-2569-4172-9225-38f5b18d35bb","Type":"ContainerDied","Data":"09ff5a83e592793f8bc5cbe3a9789e26a1d97963e7dd5e6e2f74720579b165f3"} Jan 26 19:13:21 crc kubenswrapper[4787]: I0126 19:13:21.390033 4787 scope.go:117] "RemoveContainer" containerID="62f57cc0cf85457dbea41ec9ff0772d3cbbae8ba9efb0f0300a598fe2e380737" Jan 26 19:13:21 crc kubenswrapper[4787]: I0126 19:13:21.390087 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549665877f-ts9sr" Jan 26 19:13:21 crc kubenswrapper[4787]: I0126 19:13:21.419589 4787 scope.go:117] "RemoveContainer" containerID="10e47e814b0b473e2e04b1c658976fefe4c54799e431b1e594a828423790865a" Jan 26 19:13:21 crc kubenswrapper[4787]: I0126 19:13:21.434152 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549665877f-ts9sr"] Jan 26 19:13:21 crc kubenswrapper[4787]: I0126 19:13:21.442164 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-549665877f-ts9sr"] Jan 26 19:13:21 crc kubenswrapper[4787]: W0126 19:13:21.552195 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode274a7dc_5150_47c2_a61a_9b1fefa7a241.slice/crio-e495765ce92456177221bf1dd4661522dfa7b6108846023735a77bfbe59cc224 WatchSource:0}: Error finding container e495765ce92456177221bf1dd4661522dfa7b6108846023735a77bfbe59cc224: Status 404 returned error can't find the container with id e495765ce92456177221bf1dd4661522dfa7b6108846023735a77bfbe59cc224 Jan 26 19:13:21 crc kubenswrapper[4787]: I0126 19:13:21.556027 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-spjxb"] Jan 26 19:13:21 crc kubenswrapper[4787]: I0126 19:13:21.600051 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d75cea-2569-4172-9225-38f5b18d35bb" path="/var/lib/kubelet/pods/d2d75cea-2569-4172-9225-38f5b18d35bb/volumes" Jan 26 19:13:22 crc kubenswrapper[4787]: I0126 19:13:22.408289 4787 generic.go:334] "Generic (PLEG): container finished" podID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" containerID="3ba7caf0bcc651d8f89ce7a0d4422ab96cf9ab5c410b6dc3036d3b51c99b0682" exitCode=0 Jan 26 19:13:22 crc kubenswrapper[4787]: I0126 19:13:22.408500 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spjxb" 
event={"ID":"e274a7dc-5150-47c2-a61a-9b1fefa7a241","Type":"ContainerDied","Data":"3ba7caf0bcc651d8f89ce7a0d4422ab96cf9ab5c410b6dc3036d3b51c99b0682"} Jan 26 19:13:22 crc kubenswrapper[4787]: I0126 19:13:22.408983 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spjxb" event={"ID":"e274a7dc-5150-47c2-a61a-9b1fefa7a241","Type":"ContainerStarted","Data":"e495765ce92456177221bf1dd4661522dfa7b6108846023735a77bfbe59cc224"} Jan 26 19:13:24 crc kubenswrapper[4787]: I0126 19:13:24.057633 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kr22b"] Jan 26 19:13:24 crc kubenswrapper[4787]: I0126 19:13:24.064291 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kr22b"] Jan 26 19:13:24 crc kubenswrapper[4787]: I0126 19:13:24.430614 4787 generic.go:334] "Generic (PLEG): container finished" podID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" containerID="cf672b1a66533dcaff8eefe1672698c6866510a6c1cb46e670c4694d257358c5" exitCode=0 Jan 26 19:13:24 crc kubenswrapper[4787]: I0126 19:13:24.430657 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spjxb" event={"ID":"e274a7dc-5150-47c2-a61a-9b1fefa7a241","Type":"ContainerDied","Data":"cf672b1a66533dcaff8eefe1672698c6866510a6c1cb46e670c4694d257358c5"} Jan 26 19:13:25 crc kubenswrapper[4787]: I0126 19:13:25.440549 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spjxb" event={"ID":"e274a7dc-5150-47c2-a61a-9b1fefa7a241","Type":"ContainerStarted","Data":"6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586"} Jan 26 19:13:25 crc kubenswrapper[4787]: I0126 19:13:25.465919 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-spjxb" podStartSLOduration=2.967170464 podStartE2EDuration="5.465898529s" podCreationTimestamp="2026-01-26 
19:13:20 +0000 UTC" firstStartedPulling="2026-01-26 19:13:22.414433469 +0000 UTC m=+5371.121569602" lastFinishedPulling="2026-01-26 19:13:24.913161534 +0000 UTC m=+5373.620297667" observedRunningTime="2026-01-26 19:13:25.460909238 +0000 UTC m=+5374.168045381" watchObservedRunningTime="2026-01-26 19:13:25.465898529 +0000 UTC m=+5374.173034662" Jan 26 19:13:25 crc kubenswrapper[4787]: I0126 19:13:25.599991 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf1cf16-fa6e-453d-8b40-55fa345eeb7a" path="/var/lib/kubelet/pods/0bf1cf16-fa6e-453d-8b40-55fa345eeb7a/volumes" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.530754 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xzphj"] Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.532309 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xzphj" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.537747 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xzphj"] Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.589121 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q798v\" (UniqueName: \"kubernetes.io/projected/60471611-d3d3-447e-8640-aec193ed80ba-kube-api-access-q798v\") pod \"neutron-db-create-xzphj\" (UID: \"60471611-d3d3-447e-8640-aec193ed80ba\") " pod="openstack/neutron-db-create-xzphj" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.589249 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60471611-d3d3-447e-8640-aec193ed80ba-operator-scripts\") pod \"neutron-db-create-xzphj\" (UID: \"60471611-d3d3-447e-8640-aec193ed80ba\") " pod="openstack/neutron-db-create-xzphj" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.628175 4787 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/neutron-19ee-account-create-update-9v6ph"] Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.629499 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-19ee-account-create-update-9v6ph" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.631712 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.657092 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-19ee-account-create-update-9v6ph"] Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.690345 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q798v\" (UniqueName: \"kubernetes.io/projected/60471611-d3d3-447e-8640-aec193ed80ba-kube-api-access-q798v\") pod \"neutron-db-create-xzphj\" (UID: \"60471611-d3d3-447e-8640-aec193ed80ba\") " pod="openstack/neutron-db-create-xzphj" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.690397 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv929\" (UniqueName: \"kubernetes.io/projected/e490a536-2166-4223-813b-112656901c59-kube-api-access-fv929\") pod \"neutron-19ee-account-create-update-9v6ph\" (UID: \"e490a536-2166-4223-813b-112656901c59\") " pod="openstack/neutron-19ee-account-create-update-9v6ph" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.690461 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e490a536-2166-4223-813b-112656901c59-operator-scripts\") pod \"neutron-19ee-account-create-update-9v6ph\" (UID: \"e490a536-2166-4223-813b-112656901c59\") " pod="openstack/neutron-19ee-account-create-update-9v6ph" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.690529 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60471611-d3d3-447e-8640-aec193ed80ba-operator-scripts\") pod \"neutron-db-create-xzphj\" (UID: \"60471611-d3d3-447e-8640-aec193ed80ba\") " pod="openstack/neutron-db-create-xzphj" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.691803 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60471611-d3d3-447e-8640-aec193ed80ba-operator-scripts\") pod \"neutron-db-create-xzphj\" (UID: \"60471611-d3d3-447e-8640-aec193ed80ba\") " pod="openstack/neutron-db-create-xzphj" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.714210 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q798v\" (UniqueName: \"kubernetes.io/projected/60471611-d3d3-447e-8640-aec193ed80ba-kube-api-access-q798v\") pod \"neutron-db-create-xzphj\" (UID: \"60471611-d3d3-447e-8640-aec193ed80ba\") " pod="openstack/neutron-db-create-xzphj" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.792289 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv929\" (UniqueName: \"kubernetes.io/projected/e490a536-2166-4223-813b-112656901c59-kube-api-access-fv929\") pod \"neutron-19ee-account-create-update-9v6ph\" (UID: \"e490a536-2166-4223-813b-112656901c59\") " pod="openstack/neutron-19ee-account-create-update-9v6ph" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.792358 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e490a536-2166-4223-813b-112656901c59-operator-scripts\") pod \"neutron-19ee-account-create-update-9v6ph\" (UID: \"e490a536-2166-4223-813b-112656901c59\") " pod="openstack/neutron-19ee-account-create-update-9v6ph" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.793175 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e490a536-2166-4223-813b-112656901c59-operator-scripts\") pod \"neutron-19ee-account-create-update-9v6ph\" (UID: \"e490a536-2166-4223-813b-112656901c59\") " pod="openstack/neutron-19ee-account-create-update-9v6ph" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.828506 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv929\" (UniqueName: \"kubernetes.io/projected/e490a536-2166-4223-813b-112656901c59-kube-api-access-fv929\") pod \"neutron-19ee-account-create-update-9v6ph\" (UID: \"e490a536-2166-4223-813b-112656901c59\") " pod="openstack/neutron-19ee-account-create-update-9v6ph" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.884967 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xzphj" Jan 26 19:13:30 crc kubenswrapper[4787]: I0126 19:13:30.957335 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-19ee-account-create-update-9v6ph" Jan 26 19:13:31 crc kubenswrapper[4787]: I0126 19:13:31.053172 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:31 crc kubenswrapper[4787]: I0126 19:13:31.053565 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:31 crc kubenswrapper[4787]: I0126 19:13:31.119920 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:31 crc kubenswrapper[4787]: I0126 19:13:31.401282 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xzphj"] Jan 26 19:13:31 crc kubenswrapper[4787]: W0126 19:13:31.403760 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60471611_d3d3_447e_8640_aec193ed80ba.slice/crio-7d338f16499b35766f79a37486ecd16325669b3ed38f7d546dd6711a53e378bf WatchSource:0}: Error finding container 7d338f16499b35766f79a37486ecd16325669b3ed38f7d546dd6711a53e378bf: Status 404 returned error can't find the container with id 7d338f16499b35766f79a37486ecd16325669b3ed38f7d546dd6711a53e378bf Jan 26 19:13:31 crc kubenswrapper[4787]: I0126 19:13:31.457336 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-19ee-account-create-update-9v6ph"] Jan 26 19:13:31 crc kubenswrapper[4787]: W0126 19:13:31.464560 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode490a536_2166_4223_813b_112656901c59.slice/crio-187571016e94cc25047418c805138cdbb3f2036eb7b2747f8afdfeda4612be9c WatchSource:0}: Error finding container 187571016e94cc25047418c805138cdbb3f2036eb7b2747f8afdfeda4612be9c: Status 404 returned error can't find the container 
with id 187571016e94cc25047418c805138cdbb3f2036eb7b2747f8afdfeda4612be9c Jan 26 19:13:31 crc kubenswrapper[4787]: I0126 19:13:31.490107 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xzphj" event={"ID":"60471611-d3d3-447e-8640-aec193ed80ba","Type":"ContainerStarted","Data":"7d338f16499b35766f79a37486ecd16325669b3ed38f7d546dd6711a53e378bf"} Jan 26 19:13:31 crc kubenswrapper[4787]: I0126 19:13:31.490939 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-19ee-account-create-update-9v6ph" event={"ID":"e490a536-2166-4223-813b-112656901c59","Type":"ContainerStarted","Data":"187571016e94cc25047418c805138cdbb3f2036eb7b2747f8afdfeda4612be9c"} Jan 26 19:13:31 crc kubenswrapper[4787]: I0126 19:13:31.548532 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:31 crc kubenswrapper[4787]: I0126 19:13:31.604573 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-spjxb"] Jan 26 19:13:32 crc kubenswrapper[4787]: I0126 19:13:32.499608 4787 generic.go:334] "Generic (PLEG): container finished" podID="60471611-d3d3-447e-8640-aec193ed80ba" containerID="99cb4b7a120c21050834e78e03f50ce1306e49158c06aa1c011ee79177ae1346" exitCode=0 Jan 26 19:13:32 crc kubenswrapper[4787]: I0126 19:13:32.499700 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xzphj" event={"ID":"60471611-d3d3-447e-8640-aec193ed80ba","Type":"ContainerDied","Data":"99cb4b7a120c21050834e78e03f50ce1306e49158c06aa1c011ee79177ae1346"} Jan 26 19:13:32 crc kubenswrapper[4787]: I0126 19:13:32.502288 4787 generic.go:334] "Generic (PLEG): container finished" podID="e490a536-2166-4223-813b-112656901c59" containerID="57d8a76fcf146b78ef9a44196759c6f6fb6cb1daa3bf5d9d777639af5f66416a" exitCode=0 Jan 26 19:13:32 crc kubenswrapper[4787]: I0126 19:13:32.502571 4787 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-19ee-account-create-update-9v6ph" event={"ID":"e490a536-2166-4223-813b-112656901c59","Type":"ContainerDied","Data":"57d8a76fcf146b78ef9a44196759c6f6fb6cb1daa3bf5d9d777639af5f66416a"} Jan 26 19:13:32 crc kubenswrapper[4787]: I0126 19:13:32.589846 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:13:32 crc kubenswrapper[4787]: E0126 19:13:32.590245 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:13:33 crc kubenswrapper[4787]: I0126 19:13:33.510381 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-spjxb" podUID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" containerName="registry-server" containerID="cri-o://6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586" gracePeriod=2 Jan 26 19:13:33 crc kubenswrapper[4787]: I0126 19:13:33.907117 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-19ee-account-create-update-9v6ph" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.005626 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.011600 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xzphj" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.052511 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv929\" (UniqueName: \"kubernetes.io/projected/e490a536-2166-4223-813b-112656901c59-kube-api-access-fv929\") pod \"e490a536-2166-4223-813b-112656901c59\" (UID: \"e490a536-2166-4223-813b-112656901c59\") " Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.052573 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e490a536-2166-4223-813b-112656901c59-operator-scripts\") pod \"e490a536-2166-4223-813b-112656901c59\" (UID: \"e490a536-2166-4223-813b-112656901c59\") " Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.053422 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e490a536-2166-4223-813b-112656901c59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e490a536-2166-4223-813b-112656901c59" (UID: "e490a536-2166-4223-813b-112656901c59"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.057607 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e490a536-2166-4223-813b-112656901c59-kube-api-access-fv929" (OuterVolumeSpecName: "kube-api-access-fv929") pod "e490a536-2166-4223-813b-112656901c59" (UID: "e490a536-2166-4223-813b-112656901c59"). InnerVolumeSpecName "kube-api-access-fv929". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.153537 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-utilities\") pod \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.153594 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60471611-d3d3-447e-8640-aec193ed80ba-operator-scripts\") pod \"60471611-d3d3-447e-8640-aec193ed80ba\" (UID: \"60471611-d3d3-447e-8640-aec193ed80ba\") " Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.153656 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q798v\" (UniqueName: \"kubernetes.io/projected/60471611-d3d3-447e-8640-aec193ed80ba-kube-api-access-q798v\") pod \"60471611-d3d3-447e-8640-aec193ed80ba\" (UID: \"60471611-d3d3-447e-8640-aec193ed80ba\") " Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.153698 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btt7j\" (UniqueName: \"kubernetes.io/projected/e274a7dc-5150-47c2-a61a-9b1fefa7a241-kube-api-access-btt7j\") pod \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.153748 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-catalog-content\") pod \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\" (UID: \"e274a7dc-5150-47c2-a61a-9b1fefa7a241\") " Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.154077 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv929\" 
(UniqueName: \"kubernetes.io/projected/e490a536-2166-4223-813b-112656901c59-kube-api-access-fv929\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.154095 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e490a536-2166-4223-813b-112656901c59-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.154233 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60471611-d3d3-447e-8640-aec193ed80ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60471611-d3d3-447e-8640-aec193ed80ba" (UID: "60471611-d3d3-447e-8640-aec193ed80ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.154428 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-utilities" (OuterVolumeSpecName: "utilities") pod "e274a7dc-5150-47c2-a61a-9b1fefa7a241" (UID: "e274a7dc-5150-47c2-a61a-9b1fefa7a241"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.157044 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60471611-d3d3-447e-8640-aec193ed80ba-kube-api-access-q798v" (OuterVolumeSpecName: "kube-api-access-q798v") pod "60471611-d3d3-447e-8640-aec193ed80ba" (UID: "60471611-d3d3-447e-8640-aec193ed80ba"). InnerVolumeSpecName "kube-api-access-q798v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.158685 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e274a7dc-5150-47c2-a61a-9b1fefa7a241-kube-api-access-btt7j" (OuterVolumeSpecName: "kube-api-access-btt7j") pod "e274a7dc-5150-47c2-a61a-9b1fefa7a241" (UID: "e274a7dc-5150-47c2-a61a-9b1fefa7a241"). InnerVolumeSpecName "kube-api-access-btt7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.255507 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.255562 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60471611-d3d3-447e-8640-aec193ed80ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.255576 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q798v\" (UniqueName: \"kubernetes.io/projected/60471611-d3d3-447e-8640-aec193ed80ba-kube-api-access-q798v\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.255587 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btt7j\" (UniqueName: \"kubernetes.io/projected/e274a7dc-5150-47c2-a61a-9b1fefa7a241-kube-api-access-btt7j\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.287407 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e274a7dc-5150-47c2-a61a-9b1fefa7a241" (UID: "e274a7dc-5150-47c2-a61a-9b1fefa7a241"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.357461 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e274a7dc-5150-47c2-a61a-9b1fefa7a241-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.520846 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-19ee-account-create-update-9v6ph" event={"ID":"e490a536-2166-4223-813b-112656901c59","Type":"ContainerDied","Data":"187571016e94cc25047418c805138cdbb3f2036eb7b2747f8afdfeda4612be9c"} Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.520898 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="187571016e94cc25047418c805138cdbb3f2036eb7b2747f8afdfeda4612be9c" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.520938 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-19ee-account-create-update-9v6ph" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.523650 4787 generic.go:334] "Generic (PLEG): container finished" podID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" containerID="6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586" exitCode=0 Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.523719 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-spjxb" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.523718 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spjxb" event={"ID":"e274a7dc-5150-47c2-a61a-9b1fefa7a241","Type":"ContainerDied","Data":"6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586"} Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.523845 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-spjxb" event={"ID":"e274a7dc-5150-47c2-a61a-9b1fefa7a241","Type":"ContainerDied","Data":"e495765ce92456177221bf1dd4661522dfa7b6108846023735a77bfbe59cc224"} Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.523890 4787 scope.go:117] "RemoveContainer" containerID="6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.533804 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xzphj" event={"ID":"60471611-d3d3-447e-8640-aec193ed80ba","Type":"ContainerDied","Data":"7d338f16499b35766f79a37486ecd16325669b3ed38f7d546dd6711a53e378bf"} Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.533845 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d338f16499b35766f79a37486ecd16325669b3ed38f7d546dd6711a53e378bf" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.533914 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xzphj" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.566456 4787 scope.go:117] "RemoveContainer" containerID="cf672b1a66533dcaff8eefe1672698c6866510a6c1cb46e670c4694d257358c5" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.570829 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-spjxb"] Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.577575 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-spjxb"] Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.585041 4787 scope.go:117] "RemoveContainer" containerID="3ba7caf0bcc651d8f89ce7a0d4422ab96cf9ab5c410b6dc3036d3b51c99b0682" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.602307 4787 scope.go:117] "RemoveContainer" containerID="6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586" Jan 26 19:13:34 crc kubenswrapper[4787]: E0126 19:13:34.603070 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586\": container with ID starting with 6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586 not found: ID does not exist" containerID="6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.603178 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586"} err="failed to get container status \"6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586\": rpc error: code = NotFound desc = could not find container \"6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586\": container with ID starting with 6138cada806976ba534b96917a3197517a0d11f6dbb1e75447cb68b90c42d586 not found: ID does not 
exist" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.603257 4787 scope.go:117] "RemoveContainer" containerID="cf672b1a66533dcaff8eefe1672698c6866510a6c1cb46e670c4694d257358c5" Jan 26 19:13:34 crc kubenswrapper[4787]: E0126 19:13:34.603680 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf672b1a66533dcaff8eefe1672698c6866510a6c1cb46e670c4694d257358c5\": container with ID starting with cf672b1a66533dcaff8eefe1672698c6866510a6c1cb46e670c4694d257358c5 not found: ID does not exist" containerID="cf672b1a66533dcaff8eefe1672698c6866510a6c1cb46e670c4694d257358c5" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.603725 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf672b1a66533dcaff8eefe1672698c6866510a6c1cb46e670c4694d257358c5"} err="failed to get container status \"cf672b1a66533dcaff8eefe1672698c6866510a6c1cb46e670c4694d257358c5\": rpc error: code = NotFound desc = could not find container \"cf672b1a66533dcaff8eefe1672698c6866510a6c1cb46e670c4694d257358c5\": container with ID starting with cf672b1a66533dcaff8eefe1672698c6866510a6c1cb46e670c4694d257358c5 not found: ID does not exist" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.603760 4787 scope.go:117] "RemoveContainer" containerID="3ba7caf0bcc651d8f89ce7a0d4422ab96cf9ab5c410b6dc3036d3b51c99b0682" Jan 26 19:13:34 crc kubenswrapper[4787]: E0126 19:13:34.604107 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ba7caf0bcc651d8f89ce7a0d4422ab96cf9ab5c410b6dc3036d3b51c99b0682\": container with ID starting with 3ba7caf0bcc651d8f89ce7a0d4422ab96cf9ab5c410b6dc3036d3b51c99b0682 not found: ID does not exist" containerID="3ba7caf0bcc651d8f89ce7a0d4422ab96cf9ab5c410b6dc3036d3b51c99b0682" Jan 26 19:13:34 crc kubenswrapper[4787]: I0126 19:13:34.604168 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ba7caf0bcc651d8f89ce7a0d4422ab96cf9ab5c410b6dc3036d3b51c99b0682"} err="failed to get container status \"3ba7caf0bcc651d8f89ce7a0d4422ab96cf9ab5c410b6dc3036d3b51c99b0682\": rpc error: code = NotFound desc = could not find container \"3ba7caf0bcc651d8f89ce7a0d4422ab96cf9ab5c410b6dc3036d3b51c99b0682\": container with ID starting with 3ba7caf0bcc651d8f89ce7a0d4422ab96cf9ab5c410b6dc3036d3b51c99b0682 not found: ID does not exist" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.600397 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" path="/var/lib/kubelet/pods/e274a7dc-5150-47c2-a61a-9b1fefa7a241/volumes" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.855184 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-j5x68"] Jan 26 19:13:35 crc kubenswrapper[4787]: E0126 19:13:35.855776 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e490a536-2166-4223-813b-112656901c59" containerName="mariadb-account-create-update" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.855793 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e490a536-2166-4223-813b-112656901c59" containerName="mariadb-account-create-update" Jan 26 19:13:35 crc kubenswrapper[4787]: E0126 19:13:35.855807 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" containerName="extract-utilities" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.855816 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" containerName="extract-utilities" Jan 26 19:13:35 crc kubenswrapper[4787]: E0126 19:13:35.855826 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60471611-d3d3-447e-8640-aec193ed80ba" containerName="mariadb-database-create" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 
19:13:35.855832 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="60471611-d3d3-447e-8640-aec193ed80ba" containerName="mariadb-database-create" Jan 26 19:13:35 crc kubenswrapper[4787]: E0126 19:13:35.855845 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" containerName="extract-content" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.855851 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" containerName="extract-content" Jan 26 19:13:35 crc kubenswrapper[4787]: E0126 19:13:35.855862 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" containerName="registry-server" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.855868 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" containerName="registry-server" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.856038 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="60471611-d3d3-447e-8640-aec193ed80ba" containerName="mariadb-database-create" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.856064 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e490a536-2166-4223-813b-112656901c59" containerName="mariadb-account-create-update" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.856079 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e274a7dc-5150-47c2-a61a-9b1fefa7a241" containerName="registry-server" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.856645 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.858802 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wzbgr" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.859626 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.859724 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.865404 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-j5x68"] Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.985431 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-combined-ca-bundle\") pod \"neutron-db-sync-j5x68\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.985484 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxbhd\" (UniqueName: \"kubernetes.io/projected/833c4748-1557-4cd3-aead-b8928712b5ec-kube-api-access-lxbhd\") pod \"neutron-db-sync-j5x68\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:35 crc kubenswrapper[4787]: I0126 19:13:35.985562 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-config\") pod \"neutron-db-sync-j5x68\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:36 crc kubenswrapper[4787]: I0126 19:13:36.086976 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-combined-ca-bundle\") pod \"neutron-db-sync-j5x68\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:36 crc kubenswrapper[4787]: I0126 19:13:36.087022 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxbhd\" (UniqueName: \"kubernetes.io/projected/833c4748-1557-4cd3-aead-b8928712b5ec-kube-api-access-lxbhd\") pod \"neutron-db-sync-j5x68\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:36 crc kubenswrapper[4787]: I0126 19:13:36.087079 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-config\") pod \"neutron-db-sync-j5x68\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:36 crc kubenswrapper[4787]: I0126 19:13:36.093713 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-combined-ca-bundle\") pod \"neutron-db-sync-j5x68\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:36 crc kubenswrapper[4787]: I0126 19:13:36.103906 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-config\") pod \"neutron-db-sync-j5x68\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:36 crc kubenswrapper[4787]: I0126 19:13:36.104896 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxbhd\" (UniqueName: 
\"kubernetes.io/projected/833c4748-1557-4cd3-aead-b8928712b5ec-kube-api-access-lxbhd\") pod \"neutron-db-sync-j5x68\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:36 crc kubenswrapper[4787]: I0126 19:13:36.173593 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:36 crc kubenswrapper[4787]: I0126 19:13:36.585523 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-j5x68"] Jan 26 19:13:37 crc kubenswrapper[4787]: I0126 19:13:37.571341 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j5x68" event={"ID":"833c4748-1557-4cd3-aead-b8928712b5ec","Type":"ContainerStarted","Data":"153515b388104022ee1ba3aa94a7308b4278205c522c4a37e7a0ec74dc352feb"} Jan 26 19:13:37 crc kubenswrapper[4787]: I0126 19:13:37.571393 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j5x68" event={"ID":"833c4748-1557-4cd3-aead-b8928712b5ec","Type":"ContainerStarted","Data":"9c6bc46ae7188f29e589c336d011dc80f7dfb117f21b6fa04f053bf8db9cb28a"} Jan 26 19:13:37 crc kubenswrapper[4787]: I0126 19:13:37.599298 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-j5x68" podStartSLOduration=2.599280047 podStartE2EDuration="2.599280047s" podCreationTimestamp="2026-01-26 19:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:13:37.596646673 +0000 UTC m=+5386.303782826" watchObservedRunningTime="2026-01-26 19:13:37.599280047 +0000 UTC m=+5386.306416190" Jan 26 19:13:41 crc kubenswrapper[4787]: I0126 19:13:41.617213 4787 generic.go:334] "Generic (PLEG): container finished" podID="833c4748-1557-4cd3-aead-b8928712b5ec" containerID="153515b388104022ee1ba3aa94a7308b4278205c522c4a37e7a0ec74dc352feb" exitCode=0 Jan 26 19:13:41 crc 
kubenswrapper[4787]: I0126 19:13:41.617270 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j5x68" event={"ID":"833c4748-1557-4cd3-aead-b8928712b5ec","Type":"ContainerDied","Data":"153515b388104022ee1ba3aa94a7308b4278205c522c4a37e7a0ec74dc352feb"} Jan 26 19:13:42 crc kubenswrapper[4787]: I0126 19:13:42.969427 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.009505 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-config\") pod \"833c4748-1557-4cd3-aead-b8928712b5ec\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.009673 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxbhd\" (UniqueName: \"kubernetes.io/projected/833c4748-1557-4cd3-aead-b8928712b5ec-kube-api-access-lxbhd\") pod \"833c4748-1557-4cd3-aead-b8928712b5ec\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.009696 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-combined-ca-bundle\") pod \"833c4748-1557-4cd3-aead-b8928712b5ec\" (UID: \"833c4748-1557-4cd3-aead-b8928712b5ec\") " Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.016205 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833c4748-1557-4cd3-aead-b8928712b5ec-kube-api-access-lxbhd" (OuterVolumeSpecName: "kube-api-access-lxbhd") pod "833c4748-1557-4cd3-aead-b8928712b5ec" (UID: "833c4748-1557-4cd3-aead-b8928712b5ec"). InnerVolumeSpecName "kube-api-access-lxbhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.035868 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-config" (OuterVolumeSpecName: "config") pod "833c4748-1557-4cd3-aead-b8928712b5ec" (UID: "833c4748-1557-4cd3-aead-b8928712b5ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.038545 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "833c4748-1557-4cd3-aead-b8928712b5ec" (UID: "833c4748-1557-4cd3-aead-b8928712b5ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.111162 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.111199 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxbhd\" (UniqueName: \"kubernetes.io/projected/833c4748-1557-4cd3-aead-b8928712b5ec-kube-api-access-lxbhd\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.111211 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833c4748-1557-4cd3-aead-b8928712b5ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.636129 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j5x68" 
event={"ID":"833c4748-1557-4cd3-aead-b8928712b5ec","Type":"ContainerDied","Data":"9c6bc46ae7188f29e589c336d011dc80f7dfb117f21b6fa04f053bf8db9cb28a"} Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.636396 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6bc46ae7188f29e589c336d011dc80f7dfb117f21b6fa04f053bf8db9cb28a" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.636210 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j5x68" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.883868 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-584488c86c-rpvzc"] Jan 26 19:13:43 crc kubenswrapper[4787]: E0126 19:13:43.884241 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833c4748-1557-4cd3-aead-b8928712b5ec" containerName="neutron-db-sync" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.884260 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="833c4748-1557-4cd3-aead-b8928712b5ec" containerName="neutron-db-sync" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.884412 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="833c4748-1557-4cd3-aead-b8928712b5ec" containerName="neutron-db-sync" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.888161 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.907763 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-584488c86c-rpvzc"] Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.924918 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtk2r\" (UniqueName: \"kubernetes.io/projected/d1da43e0-33bb-4e27-a2cc-04503af4d164-kube-api-access-jtk2r\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.924974 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-sb\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.925216 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-config\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.925338 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-dns-svc\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:43 crc kubenswrapper[4787]: I0126 19:13:43.925424 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-nb\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.027505 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7986b9755c-vhh5z"] Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.028263 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtk2r\" (UniqueName: \"kubernetes.io/projected/d1da43e0-33bb-4e27-a2cc-04503af4d164-kube-api-access-jtk2r\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.028357 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-sb\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.028515 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-config\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.028581 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-dns-svc\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc 
kubenswrapper[4787]: I0126 19:13:44.028626 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-nb\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.028906 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.029567 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-sb\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.029663 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-config\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.029786 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-dns-svc\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.030165 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-nb\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " 
pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.031401 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.031834 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wzbgr" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.032061 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.043936 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7986b9755c-vhh5z"] Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.064599 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtk2r\" (UniqueName: \"kubernetes.io/projected/d1da43e0-33bb-4e27-a2cc-04503af4d164-kube-api-access-jtk2r\") pod \"dnsmasq-dns-584488c86c-rpvzc\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.130340 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b41501cd-e353-4164-bd99-54a54d17c041-httpd-config\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.130386 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41501cd-e353-4164-bd99-54a54d17c041-combined-ca-bundle\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.130430 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b41501cd-e353-4164-bd99-54a54d17c041-config\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.130547 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llnxd\" (UniqueName: \"kubernetes.io/projected/b41501cd-e353-4164-bd99-54a54d17c041-kube-api-access-llnxd\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.217740 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.231769 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llnxd\" (UniqueName: \"kubernetes.io/projected/b41501cd-e353-4164-bd99-54a54d17c041-kube-api-access-llnxd\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.232133 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b41501cd-e353-4164-bd99-54a54d17c041-httpd-config\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.232171 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41501cd-e353-4164-bd99-54a54d17c041-combined-ca-bundle\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " 
pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.232213 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b41501cd-e353-4164-bd99-54a54d17c041-config\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.237703 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b41501cd-e353-4164-bd99-54a54d17c041-httpd-config\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.238311 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41501cd-e353-4164-bd99-54a54d17c041-combined-ca-bundle\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.238993 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b41501cd-e353-4164-bd99-54a54d17c041-config\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.252427 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llnxd\" (UniqueName: \"kubernetes.io/projected/b41501cd-e353-4164-bd99-54a54d17c041-kube-api-access-llnxd\") pod \"neutron-7986b9755c-vhh5z\" (UID: \"b41501cd-e353-4164-bd99-54a54d17c041\") " pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.346304 4787 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.589400 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:13:44 crc kubenswrapper[4787]: E0126 19:13:44.589648 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.712192 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-584488c86c-rpvzc"] Jan 26 19:13:44 crc kubenswrapper[4787]: I0126 19:13:44.895684 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7986b9755c-vhh5z"] Jan 26 19:13:45 crc kubenswrapper[4787]: I0126 19:13:45.651835 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7986b9755c-vhh5z" event={"ID":"b41501cd-e353-4164-bd99-54a54d17c041","Type":"ContainerStarted","Data":"cb3b31486e8e1a0b1686a8d6e2931063cd293d49006c0d3724ba2f191a623d21"} Jan 26 19:13:45 crc kubenswrapper[4787]: I0126 19:13:45.652175 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7986b9755c-vhh5z" event={"ID":"b41501cd-e353-4164-bd99-54a54d17c041","Type":"ContainerStarted","Data":"f0730896c5deef707ec88e73cf0cf91aea1248a05b44d6f50c7f917ad1d1fcb7"} Jan 26 19:13:45 crc kubenswrapper[4787]: I0126 19:13:45.652197 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:13:45 crc kubenswrapper[4787]: I0126 19:13:45.652212 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-7986b9755c-vhh5z" event={"ID":"b41501cd-e353-4164-bd99-54a54d17c041","Type":"ContainerStarted","Data":"1f94c02f33559b1feed6c19afb06448d9bcba0d30c15d8ba3a15808ee14a079d"} Jan 26 19:13:45 crc kubenswrapper[4787]: I0126 19:13:45.653236 4787 generic.go:334] "Generic (PLEG): container finished" podID="d1da43e0-33bb-4e27-a2cc-04503af4d164" containerID="e4e78bb9a0394cd016fa7b3132222a11a58e84ccb7fb0f6d34ae361eae205acd" exitCode=0 Jan 26 19:13:45 crc kubenswrapper[4787]: I0126 19:13:45.653270 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" event={"ID":"d1da43e0-33bb-4e27-a2cc-04503af4d164","Type":"ContainerDied","Data":"e4e78bb9a0394cd016fa7b3132222a11a58e84ccb7fb0f6d34ae361eae205acd"} Jan 26 19:13:45 crc kubenswrapper[4787]: I0126 19:13:45.653295 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" event={"ID":"d1da43e0-33bb-4e27-a2cc-04503af4d164","Type":"ContainerStarted","Data":"54d57dce6888036df2cbd58aad06262eb837747c9db0995c991a469848cebe04"} Jan 26 19:13:45 crc kubenswrapper[4787]: I0126 19:13:45.676289 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7986b9755c-vhh5z" podStartSLOduration=1.676264706 podStartE2EDuration="1.676264706s" podCreationTimestamp="2026-01-26 19:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:13:45.674252668 +0000 UTC m=+5394.381388801" watchObservedRunningTime="2026-01-26 19:13:45.676264706 +0000 UTC m=+5394.383400839" Jan 26 19:13:46 crc kubenswrapper[4787]: I0126 19:13:46.665722 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" event={"ID":"d1da43e0-33bb-4e27-a2cc-04503af4d164","Type":"ContainerStarted","Data":"9c40ca33b19b6c3a3b431705cca0d83acc3b1b3f01f3cefe3020265fd537c5e2"} Jan 26 19:13:46 crc kubenswrapper[4787]: 
I0126 19:13:46.698991 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" podStartSLOduration=3.698965495 podStartE2EDuration="3.698965495s" podCreationTimestamp="2026-01-26 19:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:13:46.693538003 +0000 UTC m=+5395.400674186" watchObservedRunningTime="2026-01-26 19:13:46.698965495 +0000 UTC m=+5395.406101628" Jan 26 19:13:47 crc kubenswrapper[4787]: I0126 19:13:47.678518 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.219113 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.286699 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb55df559-8jjs5"] Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.286928 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" podUID="7841c8e0-74ad-4638-b2ef-326a9bcd00fb" containerName="dnsmasq-dns" containerID="cri-o://484a85ea505224c5ad8e6639403454141916f7a38e154c095439738d06fb911f" gracePeriod=10 Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.734932 4787 generic.go:334] "Generic (PLEG): container finished" podID="7841c8e0-74ad-4638-b2ef-326a9bcd00fb" containerID="484a85ea505224c5ad8e6639403454141916f7a38e154c095439738d06fb911f" exitCode=0 Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.735054 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" event={"ID":"7841c8e0-74ad-4638-b2ef-326a9bcd00fb","Type":"ContainerDied","Data":"484a85ea505224c5ad8e6639403454141916f7a38e154c095439738d06fb911f"} Jan 26 19:13:54 crc 
kubenswrapper[4787]: I0126 19:13:54.735323 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" event={"ID":"7841c8e0-74ad-4638-b2ef-326a9bcd00fb","Type":"ContainerDied","Data":"42cf7c2efc1198f6aadf8f430dc0ee5dea20fd87fd43e16ae851f66dce3c983f"} Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.735337 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42cf7c2efc1198f6aadf8f430dc0ee5dea20fd87fd43e16ae851f66dce3c983f" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.796410 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.828225 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-sb\") pod \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.828295 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-config\") pod \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.828344 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr4d9\" (UniqueName: \"kubernetes.io/projected/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-kube-api-access-sr4d9\") pod \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.828530 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-nb\") pod \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.828599 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-dns-svc\") pod \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\" (UID: \"7841c8e0-74ad-4638-b2ef-326a9bcd00fb\") " Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.839759 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-kube-api-access-sr4d9" (OuterVolumeSpecName: "kube-api-access-sr4d9") pod "7841c8e0-74ad-4638-b2ef-326a9bcd00fb" (UID: "7841c8e0-74ad-4638-b2ef-326a9bcd00fb"). InnerVolumeSpecName "kube-api-access-sr4d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.878431 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-config" (OuterVolumeSpecName: "config") pod "7841c8e0-74ad-4638-b2ef-326a9bcd00fb" (UID: "7841c8e0-74ad-4638-b2ef-326a9bcd00fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.889285 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7841c8e0-74ad-4638-b2ef-326a9bcd00fb" (UID: "7841c8e0-74ad-4638-b2ef-326a9bcd00fb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.899434 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7841c8e0-74ad-4638-b2ef-326a9bcd00fb" (UID: "7841c8e0-74ad-4638-b2ef-326a9bcd00fb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.900262 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7841c8e0-74ad-4638-b2ef-326a9bcd00fb" (UID: "7841c8e0-74ad-4638-b2ef-326a9bcd00fb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.930932 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.930987 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.931000 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:54 crc kubenswrapper[4787]: I0126 19:13:54.931012 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:54 crc 
kubenswrapper[4787]: I0126 19:13:54.931025 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr4d9\" (UniqueName: \"kubernetes.io/projected/7841c8e0-74ad-4638-b2ef-326a9bcd00fb-kube-api-access-sr4d9\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:55 crc kubenswrapper[4787]: I0126 19:13:55.742895 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb55df559-8jjs5" Jan 26 19:13:55 crc kubenswrapper[4787]: I0126 19:13:55.764281 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb55df559-8jjs5"] Jan 26 19:13:55 crc kubenswrapper[4787]: I0126 19:13:55.772415 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bb55df559-8jjs5"] Jan 26 19:13:57 crc kubenswrapper[4787]: I0126 19:13:57.607324 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7841c8e0-74ad-4638-b2ef-326a9bcd00fb" path="/var/lib/kubelet/pods/7841c8e0-74ad-4638-b2ef-326a9bcd00fb/volumes" Jan 26 19:13:58 crc kubenswrapper[4787]: I0126 19:13:58.426138 4787 scope.go:117] "RemoveContainer" containerID="96475a93261e7f0cef713f1e76bca10ff79a163748872a4cf5f08f7623454c36" Jan 26 19:13:59 crc kubenswrapper[4787]: I0126 19:13:59.589446 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:13:59 crc kubenswrapper[4787]: E0126 19:13:59.590185 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:14:11 crc kubenswrapper[4787]: I0126 19:14:11.595161 4787 scope.go:117] "RemoveContainer" 
containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:14:11 crc kubenswrapper[4787]: E0126 19:14:11.595788 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:14:14 crc kubenswrapper[4787]: I0126 19:14:14.362565 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7986b9755c-vhh5z" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.807774 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k2krx"] Jan 26 19:14:20 crc kubenswrapper[4787]: E0126 19:14:20.808727 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7841c8e0-74ad-4638-b2ef-326a9bcd00fb" containerName="init" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.808742 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7841c8e0-74ad-4638-b2ef-326a9bcd00fb" containerName="init" Jan 26 19:14:20 crc kubenswrapper[4787]: E0126 19:14:20.808766 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7841c8e0-74ad-4638-b2ef-326a9bcd00fb" containerName="dnsmasq-dns" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.808774 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7841c8e0-74ad-4638-b2ef-326a9bcd00fb" containerName="dnsmasq-dns" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.809005 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7841c8e0-74ad-4638-b2ef-326a9bcd00fb" containerName="dnsmasq-dns" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.810500 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.828771 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2krx"] Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.873529 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-catalog-content\") pod \"redhat-marketplace-k2krx\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.873602 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqp6n\" (UniqueName: \"kubernetes.io/projected/dc9c14de-882c-4e5a-b21a-0a73a6363275-kube-api-access-wqp6n\") pod \"redhat-marketplace-k2krx\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.873653 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-utilities\") pod \"redhat-marketplace-k2krx\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.974666 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-catalog-content\") pod \"redhat-marketplace-k2krx\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.974735 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wqp6n\" (UniqueName: \"kubernetes.io/projected/dc9c14de-882c-4e5a-b21a-0a73a6363275-kube-api-access-wqp6n\") pod \"redhat-marketplace-k2krx\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.974783 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-utilities\") pod \"redhat-marketplace-k2krx\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.975224 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-utilities\") pod \"redhat-marketplace-k2krx\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:20 crc kubenswrapper[4787]: I0126 19:14:20.975271 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-catalog-content\") pod \"redhat-marketplace-k2krx\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.011074 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqp6n\" (UniqueName: \"kubernetes.io/projected/dc9c14de-882c-4e5a-b21a-0a73a6363275-kube-api-access-wqp6n\") pod \"redhat-marketplace-k2krx\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.148354 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.237624 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2d7ds"] Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.238919 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2d7ds" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.255151 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2d7ds"] Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.280023 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d85a6954-2c85-4237-a63e-2d2391fdc1dc-operator-scripts\") pod \"glance-db-create-2d7ds\" (UID: \"d85a6954-2c85-4237-a63e-2d2391fdc1dc\") " pod="openstack/glance-db-create-2d7ds" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.280164 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqs5m\" (UniqueName: \"kubernetes.io/projected/d85a6954-2c85-4237-a63e-2d2391fdc1dc-kube-api-access-dqs5m\") pod \"glance-db-create-2d7ds\" (UID: \"d85a6954-2c85-4237-a63e-2d2391fdc1dc\") " pod="openstack/glance-db-create-2d7ds" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.342510 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0a26-account-create-update-ngdcz"] Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.344113 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0a26-account-create-update-ngdcz" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.346341 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.360805 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0a26-account-create-update-ngdcz"] Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.382484 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnklw\" (UniqueName: \"kubernetes.io/projected/98757f95-4a1d-4854-a2ce-39c97784b153-kube-api-access-vnklw\") pod \"glance-0a26-account-create-update-ngdcz\" (UID: \"98757f95-4a1d-4854-a2ce-39c97784b153\") " pod="openstack/glance-0a26-account-create-update-ngdcz" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.382624 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d85a6954-2c85-4237-a63e-2d2391fdc1dc-operator-scripts\") pod \"glance-db-create-2d7ds\" (UID: \"d85a6954-2c85-4237-a63e-2d2391fdc1dc\") " pod="openstack/glance-db-create-2d7ds" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.382710 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98757f95-4a1d-4854-a2ce-39c97784b153-operator-scripts\") pod \"glance-0a26-account-create-update-ngdcz\" (UID: \"98757f95-4a1d-4854-a2ce-39c97784b153\") " pod="openstack/glance-0a26-account-create-update-ngdcz" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.382755 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqs5m\" (UniqueName: \"kubernetes.io/projected/d85a6954-2c85-4237-a63e-2d2391fdc1dc-kube-api-access-dqs5m\") pod \"glance-db-create-2d7ds\" (UID: 
\"d85a6954-2c85-4237-a63e-2d2391fdc1dc\") " pod="openstack/glance-db-create-2d7ds" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.383436 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d85a6954-2c85-4237-a63e-2d2391fdc1dc-operator-scripts\") pod \"glance-db-create-2d7ds\" (UID: \"d85a6954-2c85-4237-a63e-2d2391fdc1dc\") " pod="openstack/glance-db-create-2d7ds" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.405243 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqs5m\" (UniqueName: \"kubernetes.io/projected/d85a6954-2c85-4237-a63e-2d2391fdc1dc-kube-api-access-dqs5m\") pod \"glance-db-create-2d7ds\" (UID: \"d85a6954-2c85-4237-a63e-2d2391fdc1dc\") " pod="openstack/glance-db-create-2d7ds" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.484043 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98757f95-4a1d-4854-a2ce-39c97784b153-operator-scripts\") pod \"glance-0a26-account-create-update-ngdcz\" (UID: \"98757f95-4a1d-4854-a2ce-39c97784b153\") " pod="openstack/glance-0a26-account-create-update-ngdcz" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.484181 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnklw\" (UniqueName: \"kubernetes.io/projected/98757f95-4a1d-4854-a2ce-39c97784b153-kube-api-access-vnklw\") pod \"glance-0a26-account-create-update-ngdcz\" (UID: \"98757f95-4a1d-4854-a2ce-39c97784b153\") " pod="openstack/glance-0a26-account-create-update-ngdcz" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.485043 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98757f95-4a1d-4854-a2ce-39c97784b153-operator-scripts\") pod \"glance-0a26-account-create-update-ngdcz\" (UID: 
\"98757f95-4a1d-4854-a2ce-39c97784b153\") " pod="openstack/glance-0a26-account-create-update-ngdcz" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.505591 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnklw\" (UniqueName: \"kubernetes.io/projected/98757f95-4a1d-4854-a2ce-39c97784b153-kube-api-access-vnklw\") pod \"glance-0a26-account-create-update-ngdcz\" (UID: \"98757f95-4a1d-4854-a2ce-39c97784b153\") " pod="openstack/glance-0a26-account-create-update-ngdcz" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.571303 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2d7ds" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.663682 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0a26-account-create-update-ngdcz" Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.754309 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2krx"] Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.972985 4787 generic.go:334] "Generic (PLEG): container finished" podID="dc9c14de-882c-4e5a-b21a-0a73a6363275" containerID="cc7868448e7f1d23e41bad0ad3da30211cbd66e01a79c8116fa974ac8c5c420c" exitCode=0 Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.973145 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2krx" event={"ID":"dc9c14de-882c-4e5a-b21a-0a73a6363275","Type":"ContainerDied","Data":"cc7868448e7f1d23e41bad0ad3da30211cbd66e01a79c8116fa974ac8c5c420c"} Jan 26 19:14:21 crc kubenswrapper[4787]: I0126 19:14:21.974788 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2krx" event={"ID":"dc9c14de-882c-4e5a-b21a-0a73a6363275","Type":"ContainerStarted","Data":"fabc0dbc8e3c347d9ac628dda942df7b68d2ed92d4d50f161bef839f26111980"} Jan 26 19:14:22 crc 
kubenswrapper[4787]: I0126 19:14:22.016931 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2d7ds"] Jan 26 19:14:22 crc kubenswrapper[4787]: W0126 19:14:22.027305 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd85a6954_2c85_4237_a63e_2d2391fdc1dc.slice/crio-58acd0772f8c035ba97eb6a10c46e06c834677a4c561f949017be82f7fc5c59a WatchSource:0}: Error finding container 58acd0772f8c035ba97eb6a10c46e06c834677a4c561f949017be82f7fc5c59a: Status 404 returned error can't find the container with id 58acd0772f8c035ba97eb6a10c46e06c834677a4c561f949017be82f7fc5c59a Jan 26 19:14:22 crc kubenswrapper[4787]: I0126 19:14:22.128082 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0a26-account-create-update-ngdcz"] Jan 26 19:14:22 crc kubenswrapper[4787]: W0126 19:14:22.132696 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98757f95_4a1d_4854_a2ce_39c97784b153.slice/crio-d8adb6e72448e0fb9fbb7e35a2e8dbc2caffb4881308057009ffcc7a8ad4bb62 WatchSource:0}: Error finding container d8adb6e72448e0fb9fbb7e35a2e8dbc2caffb4881308057009ffcc7a8ad4bb62: Status 404 returned error can't find the container with id d8adb6e72448e0fb9fbb7e35a2e8dbc2caffb4881308057009ffcc7a8ad4bb62 Jan 26 19:14:22 crc kubenswrapper[4787]: I0126 19:14:22.589236 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:14:22 crc kubenswrapper[4787]: E0126 19:14:22.589738 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:14:22 crc kubenswrapper[4787]: I0126 19:14:22.986055 4787 generic.go:334] "Generic (PLEG): container finished" podID="d85a6954-2c85-4237-a63e-2d2391fdc1dc" containerID="c69d940a2488ee8d50357aeb98a2c5747bddc863f894bd0a4274e8ab7012f736" exitCode=0 Jan 26 19:14:22 crc kubenswrapper[4787]: I0126 19:14:22.986106 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2d7ds" event={"ID":"d85a6954-2c85-4237-a63e-2d2391fdc1dc","Type":"ContainerDied","Data":"c69d940a2488ee8d50357aeb98a2c5747bddc863f894bd0a4274e8ab7012f736"} Jan 26 19:14:22 crc kubenswrapper[4787]: I0126 19:14:22.986487 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2d7ds" event={"ID":"d85a6954-2c85-4237-a63e-2d2391fdc1dc","Type":"ContainerStarted","Data":"58acd0772f8c035ba97eb6a10c46e06c834677a4c561f949017be82f7fc5c59a"} Jan 26 19:14:22 crc kubenswrapper[4787]: I0126 19:14:22.991535 4787 generic.go:334] "Generic (PLEG): container finished" podID="dc9c14de-882c-4e5a-b21a-0a73a6363275" containerID="6b64f27824042134bba265373bee5c08239783a83430b077954bc6fb72c938a3" exitCode=0 Jan 26 19:14:22 crc kubenswrapper[4787]: I0126 19:14:22.991567 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2krx" event={"ID":"dc9c14de-882c-4e5a-b21a-0a73a6363275","Type":"ContainerDied","Data":"6b64f27824042134bba265373bee5c08239783a83430b077954bc6fb72c938a3"} Jan 26 19:14:22 crc kubenswrapper[4787]: I0126 19:14:22.993979 4787 generic.go:334] "Generic (PLEG): container finished" podID="98757f95-4a1d-4854-a2ce-39c97784b153" containerID="aa828fd9b8fc393d71b24935b9ddc74f127e03451928bc807aec4eb6644f7641" exitCode=0 Jan 26 19:14:22 crc kubenswrapper[4787]: I0126 19:14:22.994016 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0a26-account-create-update-ngdcz" 
event={"ID":"98757f95-4a1d-4854-a2ce-39c97784b153","Type":"ContainerDied","Data":"aa828fd9b8fc393d71b24935b9ddc74f127e03451928bc807aec4eb6644f7641"} Jan 26 19:14:22 crc kubenswrapper[4787]: I0126 19:14:22.994040 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0a26-account-create-update-ngdcz" event={"ID":"98757f95-4a1d-4854-a2ce-39c97784b153","Type":"ContainerStarted","Data":"d8adb6e72448e0fb9fbb7e35a2e8dbc2caffb4881308057009ffcc7a8ad4bb62"} Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.006464 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2krx" event={"ID":"dc9c14de-882c-4e5a-b21a-0a73a6363275","Type":"ContainerStarted","Data":"f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef"} Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.044231 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k2krx" podStartSLOduration=2.564267685 podStartE2EDuration="4.044210467s" podCreationTimestamp="2026-01-26 19:14:20 +0000 UTC" firstStartedPulling="2026-01-26 19:14:21.976024488 +0000 UTC m=+5430.683160621" lastFinishedPulling="2026-01-26 19:14:23.45596727 +0000 UTC m=+5432.163103403" observedRunningTime="2026-01-26 19:14:24.027115272 +0000 UTC m=+5432.734251445" watchObservedRunningTime="2026-01-26 19:14:24.044210467 +0000 UTC m=+5432.751346610" Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.450538 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0a26-account-create-update-ngdcz" Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.457887 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2d7ds" Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.636581 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98757f95-4a1d-4854-a2ce-39c97784b153-operator-scripts\") pod \"98757f95-4a1d-4854-a2ce-39c97784b153\" (UID: \"98757f95-4a1d-4854-a2ce-39c97784b153\") " Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.636993 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnklw\" (UniqueName: \"kubernetes.io/projected/98757f95-4a1d-4854-a2ce-39c97784b153-kube-api-access-vnklw\") pod \"98757f95-4a1d-4854-a2ce-39c97784b153\" (UID: \"98757f95-4a1d-4854-a2ce-39c97784b153\") " Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.637168 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqs5m\" (UniqueName: \"kubernetes.io/projected/d85a6954-2c85-4237-a63e-2d2391fdc1dc-kube-api-access-dqs5m\") pod \"d85a6954-2c85-4237-a63e-2d2391fdc1dc\" (UID: \"d85a6954-2c85-4237-a63e-2d2391fdc1dc\") " Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.637220 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d85a6954-2c85-4237-a63e-2d2391fdc1dc-operator-scripts\") pod \"d85a6954-2c85-4237-a63e-2d2391fdc1dc\" (UID: \"d85a6954-2c85-4237-a63e-2d2391fdc1dc\") " Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.637799 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98757f95-4a1d-4854-a2ce-39c97784b153-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98757f95-4a1d-4854-a2ce-39c97784b153" (UID: "98757f95-4a1d-4854-a2ce-39c97784b153"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.637855 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d85a6954-2c85-4237-a63e-2d2391fdc1dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d85a6954-2c85-4237-a63e-2d2391fdc1dc" (UID: "d85a6954-2c85-4237-a63e-2d2391fdc1dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.643512 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98757f95-4a1d-4854-a2ce-39c97784b153-kube-api-access-vnklw" (OuterVolumeSpecName: "kube-api-access-vnklw") pod "98757f95-4a1d-4854-a2ce-39c97784b153" (UID: "98757f95-4a1d-4854-a2ce-39c97784b153"). InnerVolumeSpecName "kube-api-access-vnklw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.651080 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85a6954-2c85-4237-a63e-2d2391fdc1dc-kube-api-access-dqs5m" (OuterVolumeSpecName: "kube-api-access-dqs5m") pod "d85a6954-2c85-4237-a63e-2d2391fdc1dc" (UID: "d85a6954-2c85-4237-a63e-2d2391fdc1dc"). InnerVolumeSpecName "kube-api-access-dqs5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.738867 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnklw\" (UniqueName: \"kubernetes.io/projected/98757f95-4a1d-4854-a2ce-39c97784b153-kube-api-access-vnklw\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.739881 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqs5m\" (UniqueName: \"kubernetes.io/projected/d85a6954-2c85-4237-a63e-2d2391fdc1dc-kube-api-access-dqs5m\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.740024 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d85a6954-2c85-4237-a63e-2d2391fdc1dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:24 crc kubenswrapper[4787]: I0126 19:14:24.740094 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98757f95-4a1d-4854-a2ce-39c97784b153-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:25 crc kubenswrapper[4787]: I0126 19:14:25.018597 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2d7ds" Jan 26 19:14:25 crc kubenswrapper[4787]: I0126 19:14:25.018613 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2d7ds" event={"ID":"d85a6954-2c85-4237-a63e-2d2391fdc1dc","Type":"ContainerDied","Data":"58acd0772f8c035ba97eb6a10c46e06c834677a4c561f949017be82f7fc5c59a"} Jan 26 19:14:25 crc kubenswrapper[4787]: I0126 19:14:25.018654 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58acd0772f8c035ba97eb6a10c46e06c834677a4c561f949017be82f7fc5c59a" Jan 26 19:14:25 crc kubenswrapper[4787]: I0126 19:14:25.020938 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0a26-account-create-update-ngdcz" event={"ID":"98757f95-4a1d-4854-a2ce-39c97784b153","Type":"ContainerDied","Data":"d8adb6e72448e0fb9fbb7e35a2e8dbc2caffb4881308057009ffcc7a8ad4bb62"} Jan 26 19:14:25 crc kubenswrapper[4787]: I0126 19:14:25.020981 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8adb6e72448e0fb9fbb7e35a2e8dbc2caffb4881308057009ffcc7a8ad4bb62" Jan 26 19:14:25 crc kubenswrapper[4787]: I0126 19:14:25.021040 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0a26-account-create-update-ngdcz" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.633847 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-vgn54"] Jan 26 19:14:26 crc kubenswrapper[4787]: E0126 19:14:26.634894 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85a6954-2c85-4237-a63e-2d2391fdc1dc" containerName="mariadb-database-create" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.635002 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85a6954-2c85-4237-a63e-2d2391fdc1dc" containerName="mariadb-database-create" Jan 26 19:14:26 crc kubenswrapper[4787]: E0126 19:14:26.635072 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98757f95-4a1d-4854-a2ce-39c97784b153" containerName="mariadb-account-create-update" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.635131 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="98757f95-4a1d-4854-a2ce-39c97784b153" containerName="mariadb-account-create-update" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.635795 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85a6954-2c85-4237-a63e-2d2391fdc1dc" containerName="mariadb-database-create" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.635872 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="98757f95-4a1d-4854-a2ce-39c97784b153" containerName="mariadb-account-create-update" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.636463 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.638411 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.639064 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tmc8m" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.652472 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vgn54"] Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.774070 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmjj8\" (UniqueName: \"kubernetes.io/projected/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-kube-api-access-zmjj8\") pod \"glance-db-sync-vgn54\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.774192 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-db-sync-config-data\") pod \"glance-db-sync-vgn54\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.774218 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-combined-ca-bundle\") pod \"glance-db-sync-vgn54\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.774378 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-config-data\") pod \"glance-db-sync-vgn54\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.875594 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-db-sync-config-data\") pod \"glance-db-sync-vgn54\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.875673 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-combined-ca-bundle\") pod \"glance-db-sync-vgn54\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.875706 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-config-data\") pod \"glance-db-sync-vgn54\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.876761 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmjj8\" (UniqueName: \"kubernetes.io/projected/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-kube-api-access-zmjj8\") pod \"glance-db-sync-vgn54\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.880605 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-combined-ca-bundle\") pod \"glance-db-sync-vgn54\" (UID: 
\"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.880607 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-db-sync-config-data\") pod \"glance-db-sync-vgn54\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.880843 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-config-data\") pod \"glance-db-sync-vgn54\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.904766 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmjj8\" (UniqueName: \"kubernetes.io/projected/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-kube-api-access-zmjj8\") pod \"glance-db-sync-vgn54\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:26 crc kubenswrapper[4787]: I0126 19:14:26.953389 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:27 crc kubenswrapper[4787]: I0126 19:14:27.497515 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vgn54"] Jan 26 19:14:28 crc kubenswrapper[4787]: I0126 19:14:28.044731 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vgn54" event={"ID":"faa8ab24-bc18-4568-ac88-2a3d3682d0f5","Type":"ContainerStarted","Data":"267a86d963b59e62c7be52b577a7665dc8901060ec997777473a35ff9d9049b7"} Jan 26 19:14:29 crc kubenswrapper[4787]: I0126 19:14:29.054284 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vgn54" event={"ID":"faa8ab24-bc18-4568-ac88-2a3d3682d0f5","Type":"ContainerStarted","Data":"5609e0dd66bf317063044b328100b89b3775947bea9a25431ea46cf1711f99b2"} Jan 26 19:14:29 crc kubenswrapper[4787]: I0126 19:14:29.085857 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-vgn54" podStartSLOduration=3.08582306 podStartE2EDuration="3.08582306s" podCreationTimestamp="2026-01-26 19:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:14:29.067375232 +0000 UTC m=+5437.774511365" watchObservedRunningTime="2026-01-26 19:14:29.08582306 +0000 UTC m=+5437.792959193" Jan 26 19:14:31 crc kubenswrapper[4787]: I0126 19:14:31.148890 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:31 crc kubenswrapper[4787]: I0126 19:14:31.148990 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:31 crc kubenswrapper[4787]: I0126 19:14:31.205146 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:32 crc kubenswrapper[4787]: 
I0126 19:14:32.085027 4787 generic.go:334] "Generic (PLEG): container finished" podID="faa8ab24-bc18-4568-ac88-2a3d3682d0f5" containerID="5609e0dd66bf317063044b328100b89b3775947bea9a25431ea46cf1711f99b2" exitCode=0 Jan 26 19:14:32 crc kubenswrapper[4787]: I0126 19:14:32.085087 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vgn54" event={"ID":"faa8ab24-bc18-4568-ac88-2a3d3682d0f5","Type":"ContainerDied","Data":"5609e0dd66bf317063044b328100b89b3775947bea9a25431ea46cf1711f99b2"} Jan 26 19:14:32 crc kubenswrapper[4787]: I0126 19:14:32.144671 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:32 crc kubenswrapper[4787]: I0126 19:14:32.203440 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2krx"] Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.572419 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.691256 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmjj8\" (UniqueName: \"kubernetes.io/projected/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-kube-api-access-zmjj8\") pod \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.691413 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-db-sync-config-data\") pod \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.691484 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-combined-ca-bundle\") pod \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.691512 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-config-data\") pod \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\" (UID: \"faa8ab24-bc18-4568-ac88-2a3d3682d0f5\") " Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.697825 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "faa8ab24-bc18-4568-ac88-2a3d3682d0f5" (UID: "faa8ab24-bc18-4568-ac88-2a3d3682d0f5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.698614 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-kube-api-access-zmjj8" (OuterVolumeSpecName: "kube-api-access-zmjj8") pod "faa8ab24-bc18-4568-ac88-2a3d3682d0f5" (UID: "faa8ab24-bc18-4568-ac88-2a3d3682d0f5"). InnerVolumeSpecName "kube-api-access-zmjj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.716263 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faa8ab24-bc18-4568-ac88-2a3d3682d0f5" (UID: "faa8ab24-bc18-4568-ac88-2a3d3682d0f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.740808 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-config-data" (OuterVolumeSpecName: "config-data") pod "faa8ab24-bc18-4568-ac88-2a3d3682d0f5" (UID: "faa8ab24-bc18-4568-ac88-2a3d3682d0f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.793610 4787 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.793635 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.793645 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:33 crc kubenswrapper[4787]: I0126 19:14:33.793654 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmjj8\" (UniqueName: \"kubernetes.io/projected/faa8ab24-bc18-4568-ac88-2a3d3682d0f5-kube-api-access-zmjj8\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.132092 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vgn54" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.132559 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vgn54" event={"ID":"faa8ab24-bc18-4568-ac88-2a3d3682d0f5","Type":"ContainerDied","Data":"267a86d963b59e62c7be52b577a7665dc8901060ec997777473a35ff9d9049b7"} Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.132596 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="267a86d963b59e62c7be52b577a7665dc8901060ec997777473a35ff9d9049b7" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.132187 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k2krx" podUID="dc9c14de-882c-4e5a-b21a-0a73a6363275" containerName="registry-server" containerID="cri-o://f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef" gracePeriod=2 Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.544511 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5576b69787-qtd6d"] Jan 26 19:14:34 crc kubenswrapper[4787]: E0126 19:14:34.544868 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa8ab24-bc18-4568-ac88-2a3d3682d0f5" containerName="glance-db-sync" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.544884 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa8ab24-bc18-4568-ac88-2a3d3682d0f5" containerName="glance-db-sync" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.545063 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa8ab24-bc18-4568-ac88-2a3d3682d0f5" containerName="glance-db-sync" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.545913 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.559276 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576b69787-qtd6d"] Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.568164 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.576494 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.578540 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.578789 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tmc8m" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.578996 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.588642 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.589434 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:14:34 crc kubenswrapper[4787]: E0126 19:14:34.589642 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.589892 4787 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.657636 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.659420 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.664454 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.672413 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.713991 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.724346 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-logs\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.724636 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.724800 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.724909 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-ceph\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.725065 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.725427 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.725657 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-config\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.725789 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-dns-svc\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.725890 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc4pg\" (UniqueName: \"kubernetes.io/projected/2cf1851f-f362-457b-a929-bad659867628-kube-api-access-gc4pg\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.725996 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-nb\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.726151 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-sb\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.726280 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6kpf\" (UniqueName: \"kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-kube-api-access-h6kpf\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.828035 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-wqp6n\" (UniqueName: \"kubernetes.io/projected/dc9c14de-882c-4e5a-b21a-0a73a6363275-kube-api-access-wqp6n\") pod \"dc9c14de-882c-4e5a-b21a-0a73a6363275\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.828481 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-catalog-content\") pod \"dc9c14de-882c-4e5a-b21a-0a73a6363275\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.828652 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-utilities\") pod \"dc9c14de-882c-4e5a-b21a-0a73a6363275\" (UID: \"dc9c14de-882c-4e5a-b21a-0a73a6363275\") " Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.828979 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829017 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-ceph\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829053 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829078 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829119 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829160 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-config\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829186 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-dns-svc\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829208 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd2tm\" (UniqueName: \"kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-kube-api-access-hd2tm\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829237 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc4pg\" (UniqueName: \"kubernetes.io/projected/2cf1851f-f362-457b-a929-bad659867628-kube-api-access-gc4pg\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829261 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-nb\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829297 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829323 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-sb\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829348 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829377 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6kpf\" (UniqueName: \"kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-kube-api-access-h6kpf\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829401 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829439 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829511 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-logs\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829581 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " 
pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.829619 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.830897 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-nb\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.831209 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-sb\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.831802 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-logs\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.832317 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-dns-svc\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.832378 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.832714 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9c14de-882c-4e5a-b21a-0a73a6363275-kube-api-access-wqp6n" (OuterVolumeSpecName: "kube-api-access-wqp6n") pod "dc9c14de-882c-4e5a-b21a-0a73a6363275" (UID: "dc9c14de-882c-4e5a-b21a-0a73a6363275"). InnerVolumeSpecName "kube-api-access-wqp6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.832727 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-utilities" (OuterVolumeSpecName: "utilities") pod "dc9c14de-882c-4e5a-b21a-0a73a6363275" (UID: "dc9c14de-882c-4e5a-b21a-0a73a6363275"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.836398 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-config\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.844143 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.845441 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-ceph\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.847332 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6kpf\" (UniqueName: \"kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-kube-api-access-h6kpf\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.848755 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.851735 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc4pg\" (UniqueName: \"kubernetes.io/projected/2cf1851f-f362-457b-a929-bad659867628-kube-api-access-gc4pg\") pod \"dnsmasq-dns-5576b69787-qtd6d\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.865136 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc9c14de-882c-4e5a-b21a-0a73a6363275" (UID: "dc9c14de-882c-4e5a-b21a-0a73a6363275"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.870497 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.877654 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.932229 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.932693 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd2tm\" (UniqueName: \"kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-kube-api-access-hd2tm\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.932916 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.933061 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.933166 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" 
Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.933272 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.933561 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.933799 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.933892 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqp6n\" (UniqueName: \"kubernetes.io/projected/dc9c14de-882c-4e5a-b21a-0a73a6363275-kube-api-access-wqp6n\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.934005 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9c14de-882c-4e5a-b21a-0a73a6363275-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.934632 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.934874 
4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.939055 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.940881 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.942225 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.946727 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:34 crc kubenswrapper[4787]: I0126 19:14:34.957016 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd2tm\" (UniqueName: 
\"kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-kube-api-access-hd2tm\") pod \"glance-default-internal-api-0\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.005274 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.028842 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.166353 4787 generic.go:334] "Generic (PLEG): container finished" podID="dc9c14de-882c-4e5a-b21a-0a73a6363275" containerID="f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef" exitCode=0 Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.166396 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2krx" event={"ID":"dc9c14de-882c-4e5a-b21a-0a73a6363275","Type":"ContainerDied","Data":"f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef"} Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.166423 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2krx" event={"ID":"dc9c14de-882c-4e5a-b21a-0a73a6363275","Type":"ContainerDied","Data":"fabc0dbc8e3c347d9ac628dda942df7b68d2ed92d4d50f161bef839f26111980"} Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.166440 4787 scope.go:117] "RemoveContainer" containerID="f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.166443 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2krx" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.199069 4787 scope.go:117] "RemoveContainer" containerID="6b64f27824042134bba265373bee5c08239783a83430b077954bc6fb72c938a3" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.233617 4787 scope.go:117] "RemoveContainer" containerID="cc7868448e7f1d23e41bad0ad3da30211cbd66e01a79c8116fa974ac8c5c420c" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.239529 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2krx"] Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.251005 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2krx"] Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.273678 4787 scope.go:117] "RemoveContainer" containerID="f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef" Jan 26 19:14:35 crc kubenswrapper[4787]: E0126 19:14:35.275112 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef\": container with ID starting with f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef not found: ID does not exist" containerID="f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.275144 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef"} err="failed to get container status \"f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef\": rpc error: code = NotFound desc = could not find container \"f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef\": container with ID starting with f7fea90866b8e78f8ea67eea34adc60af5c2f4a30e3f72c554dee8f7909e98ef not found: 
ID does not exist" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.275184 4787 scope.go:117] "RemoveContainer" containerID="6b64f27824042134bba265373bee5c08239783a83430b077954bc6fb72c938a3" Jan 26 19:14:35 crc kubenswrapper[4787]: E0126 19:14:35.275614 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b64f27824042134bba265373bee5c08239783a83430b077954bc6fb72c938a3\": container with ID starting with 6b64f27824042134bba265373bee5c08239783a83430b077954bc6fb72c938a3 not found: ID does not exist" containerID="6b64f27824042134bba265373bee5c08239783a83430b077954bc6fb72c938a3" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.275658 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b64f27824042134bba265373bee5c08239783a83430b077954bc6fb72c938a3"} err="failed to get container status \"6b64f27824042134bba265373bee5c08239783a83430b077954bc6fb72c938a3\": rpc error: code = NotFound desc = could not find container \"6b64f27824042134bba265373bee5c08239783a83430b077954bc6fb72c938a3\": container with ID starting with 6b64f27824042134bba265373bee5c08239783a83430b077954bc6fb72c938a3 not found: ID does not exist" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.275708 4787 scope.go:117] "RemoveContainer" containerID="cc7868448e7f1d23e41bad0ad3da30211cbd66e01a79c8116fa974ac8c5c420c" Jan 26 19:14:35 crc kubenswrapper[4787]: E0126 19:14:35.276224 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc7868448e7f1d23e41bad0ad3da30211cbd66e01a79c8116fa974ac8c5c420c\": container with ID starting with cc7868448e7f1d23e41bad0ad3da30211cbd66e01a79c8116fa974ac8c5c420c not found: ID does not exist" containerID="cc7868448e7f1d23e41bad0ad3da30211cbd66e01a79c8116fa974ac8c5c420c" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.276255 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc7868448e7f1d23e41bad0ad3da30211cbd66e01a79c8116fa974ac8c5c420c"} err="failed to get container status \"cc7868448e7f1d23e41bad0ad3da30211cbd66e01a79c8116fa974ac8c5c420c\": rpc error: code = NotFound desc = could not find container \"cc7868448e7f1d23e41bad0ad3da30211cbd66e01a79c8116fa974ac8c5c420c\": container with ID starting with cc7868448e7f1d23e41bad0ad3da30211cbd66e01a79c8116fa974ac8c5c420c not found: ID does not exist" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.353414 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576b69787-qtd6d"] Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.608240 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9c14de-882c-4e5a-b21a-0a73a6363275" path="/var/lib/kubelet/pods/dc9c14de-882c-4e5a-b21a-0a73a6363275/volumes" Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.609345 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.609863 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:14:35 crc kubenswrapper[4787]: I0126 19:14:35.689262 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:14:36 crc kubenswrapper[4787]: I0126 19:14:36.176247 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2d0ffc8-40bf-497d-8ac9-34798379814a","Type":"ContainerStarted","Data":"ec71785d879daa26f056129f19b6ad86e62d97958555d5bb5ca29c69c946db9b"} Jan 26 19:14:36 crc kubenswrapper[4787]: I0126 19:14:36.177651 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3","Type":"ContainerStarted","Data":"b4973fd87fb40a1d7b5ecb8b4c3e3aabbfd65bff2d9eb79c809553899381a2cc"} Jan 26 19:14:36 crc kubenswrapper[4787]: I0126 19:14:36.184958 4787 generic.go:334] "Generic (PLEG): container finished" podID="2cf1851f-f362-457b-a929-bad659867628" containerID="12b146d71eaf0d7dc7884ef762673a751af2f412e7cc5887914e068431f160da" exitCode=0 Jan 26 19:14:36 crc kubenswrapper[4787]: I0126 19:14:36.185008 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" event={"ID":"2cf1851f-f362-457b-a929-bad659867628","Type":"ContainerDied","Data":"12b146d71eaf0d7dc7884ef762673a751af2f412e7cc5887914e068431f160da"} Jan 26 19:14:36 crc kubenswrapper[4787]: I0126 19:14:36.185037 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" event={"ID":"2cf1851f-f362-457b-a929-bad659867628","Type":"ContainerStarted","Data":"005a5cf85a891311b7429d2ee81df20ceee1c3e22dd1e240cbfae27605856902"} Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.194889 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" event={"ID":"2cf1851f-f362-457b-a929-bad659867628","Type":"ContainerStarted","Data":"566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd"} Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.195279 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.205260 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f2d0ffc8-40bf-497d-8ac9-34798379814a" containerName="glance-log" containerID="cri-o://648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61" gracePeriod=30 Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.205361 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2d0ffc8-40bf-497d-8ac9-34798379814a","Type":"ContainerStarted","Data":"a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d"} Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.205381 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2d0ffc8-40bf-497d-8ac9-34798379814a","Type":"ContainerStarted","Data":"648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61"} Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.205424 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f2d0ffc8-40bf-497d-8ac9-34798379814a" containerName="glance-httpd" containerID="cri-o://a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d" gracePeriod=30 Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.213069 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3","Type":"ContainerStarted","Data":"756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd"} Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.213135 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3","Type":"ContainerStarted","Data":"e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d"} Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.247438 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" podStartSLOduration=3.247416137 podStartE2EDuration="3.247416137s" podCreationTimestamp="2026-01-26 19:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:14:37.241486134 +0000 UTC m=+5445.948622267" 
watchObservedRunningTime="2026-01-26 19:14:37.247416137 +0000 UTC m=+5445.954552280" Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.288599 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.288575858 podStartE2EDuration="3.288575858s" podCreationTimestamp="2026-01-26 19:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:14:37.286239531 +0000 UTC m=+5445.993375664" watchObservedRunningTime="2026-01-26 19:14:37.288575858 +0000 UTC m=+5445.995711991" Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.337216 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.33719489 podStartE2EDuration="3.33719489s" podCreationTimestamp="2026-01-26 19:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:14:37.325232739 +0000 UTC m=+5446.032368872" watchObservedRunningTime="2026-01-26 19:14:37.33719489 +0000 UTC m=+5446.044331033" Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.831499 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:14:37 crc kubenswrapper[4787]: I0126 19:14:37.967501 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.116872 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-scripts\") pod \"f2d0ffc8-40bf-497d-8ac9-34798379814a\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.116939 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-logs\") pod \"f2d0ffc8-40bf-497d-8ac9-34798379814a\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.117020 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6kpf\" (UniqueName: \"kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-kube-api-access-h6kpf\") pod \"f2d0ffc8-40bf-497d-8ac9-34798379814a\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.117062 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-httpd-run\") pod \"f2d0ffc8-40bf-497d-8ac9-34798379814a\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.117140 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-config-data\") pod \"f2d0ffc8-40bf-497d-8ac9-34798379814a\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.117188 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-combined-ca-bundle\") pod \"f2d0ffc8-40bf-497d-8ac9-34798379814a\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.117235 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-ceph\") pod \"f2d0ffc8-40bf-497d-8ac9-34798379814a\" (UID: \"f2d0ffc8-40bf-497d-8ac9-34798379814a\") " Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.117425 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f2d0ffc8-40bf-497d-8ac9-34798379814a" (UID: "f2d0ffc8-40bf-497d-8ac9-34798379814a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.117683 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.117928 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-logs" (OuterVolumeSpecName: "logs") pod "f2d0ffc8-40bf-497d-8ac9-34798379814a" (UID: "f2d0ffc8-40bf-497d-8ac9-34798379814a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.123130 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-ceph" (OuterVolumeSpecName: "ceph") pod "f2d0ffc8-40bf-497d-8ac9-34798379814a" (UID: "f2d0ffc8-40bf-497d-8ac9-34798379814a"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.124501 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-scripts" (OuterVolumeSpecName: "scripts") pod "f2d0ffc8-40bf-497d-8ac9-34798379814a" (UID: "f2d0ffc8-40bf-497d-8ac9-34798379814a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.125558 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-kube-api-access-h6kpf" (OuterVolumeSpecName: "kube-api-access-h6kpf") pod "f2d0ffc8-40bf-497d-8ac9-34798379814a" (UID: "f2d0ffc8-40bf-497d-8ac9-34798379814a"). InnerVolumeSpecName "kube-api-access-h6kpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.144280 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2d0ffc8-40bf-497d-8ac9-34798379814a" (UID: "f2d0ffc8-40bf-497d-8ac9-34798379814a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.186156 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-config-data" (OuterVolumeSpecName: "config-data") pod "f2d0ffc8-40bf-497d-8ac9-34798379814a" (UID: "f2d0ffc8-40bf-497d-8ac9-34798379814a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.223006 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2d0ffc8-40bf-497d-8ac9-34798379814a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.224139 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6kpf\" (UniqueName: \"kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-kube-api-access-h6kpf\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.224279 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.224369 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.224451 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2d0ffc8-40bf-497d-8ac9-34798379814a-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.224544 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2d0ffc8-40bf-497d-8ac9-34798379814a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.232419 4787 generic.go:334] "Generic (PLEG): container finished" podID="f2d0ffc8-40bf-497d-8ac9-34798379814a" containerID="a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d" exitCode=0 Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.232456 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="f2d0ffc8-40bf-497d-8ac9-34798379814a" containerID="648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61" exitCode=143 Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.232560 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2d0ffc8-40bf-497d-8ac9-34798379814a","Type":"ContainerDied","Data":"a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d"} Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.232612 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2d0ffc8-40bf-497d-8ac9-34798379814a","Type":"ContainerDied","Data":"648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61"} Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.232631 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2d0ffc8-40bf-497d-8ac9-34798379814a","Type":"ContainerDied","Data":"ec71785d879daa26f056129f19b6ad86e62d97958555d5bb5ca29c69c946db9b"} Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.232650 4787 scope.go:117] "RemoveContainer" containerID="a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.232822 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.261754 4787 scope.go:117] "RemoveContainer" containerID="648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.280839 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.287689 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.298104 4787 scope.go:117] "RemoveContainer" containerID="a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d" Jan 26 19:14:38 crc kubenswrapper[4787]: E0126 19:14:38.305400 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d\": container with ID starting with a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d not found: ID does not exist" containerID="a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.305445 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d"} err="failed to get container status \"a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d\": rpc error: code = NotFound desc = could not find container \"a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d\": container with ID starting with a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d not found: ID does not exist" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.305470 4787 scope.go:117] "RemoveContainer" containerID="648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61" Jan 
26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.305559 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:14:38 crc kubenswrapper[4787]: E0126 19:14:38.305881 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9c14de-882c-4e5a-b21a-0a73a6363275" containerName="extract-utilities" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.305897 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9c14de-882c-4e5a-b21a-0a73a6363275" containerName="extract-utilities" Jan 26 19:14:38 crc kubenswrapper[4787]: E0126 19:14:38.305905 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d0ffc8-40bf-497d-8ac9-34798379814a" containerName="glance-httpd" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.305911 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d0ffc8-40bf-497d-8ac9-34798379814a" containerName="glance-httpd" Jan 26 19:14:38 crc kubenswrapper[4787]: E0126 19:14:38.305925 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d0ffc8-40bf-497d-8ac9-34798379814a" containerName="glance-log" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.305931 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d0ffc8-40bf-497d-8ac9-34798379814a" containerName="glance-log" Jan 26 19:14:38 crc kubenswrapper[4787]: E0126 19:14:38.305940 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9c14de-882c-4e5a-b21a-0a73a6363275" containerName="extract-content" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.305961 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9c14de-882c-4e5a-b21a-0a73a6363275" containerName="extract-content" Jan 26 19:14:38 crc kubenswrapper[4787]: E0126 19:14:38.305972 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9c14de-882c-4e5a-b21a-0a73a6363275" containerName="registry-server" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 
19:14:38.305978 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9c14de-882c-4e5a-b21a-0a73a6363275" containerName="registry-server" Jan 26 19:14:38 crc kubenswrapper[4787]: E0126 19:14:38.306080 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61\": container with ID starting with 648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61 not found: ID does not exist" containerID="648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.306126 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d0ffc8-40bf-497d-8ac9-34798379814a" containerName="glance-httpd" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.306119 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61"} err="failed to get container status \"648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61\": rpc error: code = NotFound desc = could not find container \"648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61\": container with ID starting with 648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61 not found: ID does not exist" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.306143 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9c14de-882c-4e5a-b21a-0a73a6363275" containerName="registry-server" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.306146 4787 scope.go:117] "RemoveContainer" containerID="a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.306161 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d0ffc8-40bf-497d-8ac9-34798379814a" containerName="glance-log" Jan 26 19:14:38 
crc kubenswrapper[4787]: I0126 19:14:38.306984 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.306980 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d"} err="failed to get container status \"a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d\": rpc error: code = NotFound desc = could not find container \"a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d\": container with ID starting with a9f422359554c16826c605603dfc22d488860a25621555ebecd779615651bb6d not found: ID does not exist" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.307469 4787 scope.go:117] "RemoveContainer" containerID="648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.308323 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61"} err="failed to get container status \"648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61\": rpc error: code = NotFound desc = could not find container \"648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61\": container with ID starting with 648fb4160640c42828dd85e04cd648d661bce99fa6cc9d806d4525417af76c61 not found: ID does not exist" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.309565 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.316335 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.327600 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9lg\" (UniqueName: \"kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-kube-api-access-dc9lg\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.327687 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.327729 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.327762 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-ceph\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.327835 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.327858 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.327978 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-logs\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.429128 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.429444 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.429486 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-logs\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.429517 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dc9lg\" (UniqueName: \"kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-kube-api-access-dc9lg\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.429538 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.429854 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.429892 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-ceph\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.430156 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-logs\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.430191 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.434366 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.434907 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.434995 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-ceph\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.435168 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.447837 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc9lg\" (UniqueName: \"kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-kube-api-access-dc9lg\") pod 
\"glance-default-external-api-0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " pod="openstack/glance-default-external-api-0" Jan 26 19:14:38 crc kubenswrapper[4787]: I0126 19:14:38.621044 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.193394 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:14:39 crc kubenswrapper[4787]: W0126 19:14:39.202860 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bf8e19f_5564_406a_82ac_27fe75ba40c0.slice/crio-2a0ceae02f8852c2bffaca921a746644f9fb15a8969d40b0bdb515ce228c75cd WatchSource:0}: Error finding container 2a0ceae02f8852c2bffaca921a746644f9fb15a8969d40b0bdb515ce228c75cd: Status 404 returned error can't find the container with id 2a0ceae02f8852c2bffaca921a746644f9fb15a8969d40b0bdb515ce228c75cd Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.242839 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0bf8e19f-5564-406a-82ac-27fe75ba40c0","Type":"ContainerStarted","Data":"2a0ceae02f8852c2bffaca921a746644f9fb15a8969d40b0bdb515ce228c75cd"} Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.243342 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" containerName="glance-log" containerID="cri-o://e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d" gracePeriod=30 Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.243541 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" containerName="glance-httpd" 
containerID="cri-o://756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd" gracePeriod=30 Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.601593 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d0ffc8-40bf-497d-8ac9-34798379814a" path="/var/lib/kubelet/pods/f2d0ffc8-40bf-497d-8ac9-34798379814a/volumes" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.776431 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.893494 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd2tm\" (UniqueName: \"kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-kube-api-access-hd2tm\") pod \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.893565 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-combined-ca-bundle\") pod \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.893656 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-scripts\") pod \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.893735 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-logs\") pod \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 
19:14:39.893815 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-httpd-run\") pod \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.893863 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-config-data\") pod \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.894506 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-logs" (OuterVolumeSpecName: "logs") pod "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" (UID: "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.894729 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-ceph\") pod \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\" (UID: \"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3\") " Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.895332 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.896928 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" (UID: "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.902826 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-ceph" (OuterVolumeSpecName: "ceph") pod "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" (UID: "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.902893 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-scripts" (OuterVolumeSpecName: "scripts") pod "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" (UID: "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.903037 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-kube-api-access-hd2tm" (OuterVolumeSpecName: "kube-api-access-hd2tm") pod "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" (UID: "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3"). InnerVolumeSpecName "kube-api-access-hd2tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.922735 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" (UID: "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.973053 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-config-data" (OuterVolumeSpecName: "config-data") pod "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" (UID: "9d1faf1e-5ba5-4864-b717-a86c7f7f89f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.996841 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd2tm\" (UniqueName: \"kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-kube-api-access-hd2tm\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.996884 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.996898 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.996907 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.996920 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:39 crc kubenswrapper[4787]: I0126 19:14:39.996932 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.255538 4787 generic.go:334] "Generic (PLEG): container finished" podID="9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" containerID="756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd" exitCode=0 Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.255570 4787 generic.go:334] "Generic (PLEG): container finished" podID="9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" containerID="e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d" exitCode=143 Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.255629 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3","Type":"ContainerDied","Data":"756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd"} Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.255660 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3","Type":"ContainerDied","Data":"e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d"} Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.255673 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9d1faf1e-5ba5-4864-b717-a86c7f7f89f3","Type":"ContainerDied","Data":"b4973fd87fb40a1d7b5ecb8b4c3e3aabbfd65bff2d9eb79c809553899381a2cc"} Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.255679 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.255691 4787 scope.go:117] "RemoveContainer" containerID="756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.266809 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0bf8e19f-5564-406a-82ac-27fe75ba40c0","Type":"ContainerStarted","Data":"81fc3b27bd24132d96a57783d5b94412cf5088038bbfb60ca3b9d698d4fba316"} Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.266853 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0bf8e19f-5564-406a-82ac-27fe75ba40c0","Type":"ContainerStarted","Data":"0ba1c0f3ed40d9d48536b6d37513a39dd74bba05e8835b6ea81f4ee5c414848c"} Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.298858 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.298831366 podStartE2EDuration="2.298831366s" podCreationTimestamp="2026-01-26 19:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:14:40.292580094 +0000 UTC m=+5448.999716237" watchObservedRunningTime="2026-01-26 19:14:40.298831366 +0000 UTC m=+5449.005967499" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.311340 4787 scope.go:117] "RemoveContainer" containerID="e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.331178 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.347411 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:14:40 crc 
kubenswrapper[4787]: I0126 19:14:40.352445 4787 scope.go:117] "RemoveContainer" containerID="756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd" Jan 26 19:14:40 crc kubenswrapper[4787]: E0126 19:14:40.353034 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd\": container with ID starting with 756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd not found: ID does not exist" containerID="756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.353082 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd"} err="failed to get container status \"756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd\": rpc error: code = NotFound desc = could not find container \"756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd\": container with ID starting with 756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd not found: ID does not exist" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.353116 4787 scope.go:117] "RemoveContainer" containerID="e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d" Jan 26 19:14:40 crc kubenswrapper[4787]: E0126 19:14:40.354090 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d\": container with ID starting with e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d not found: ID does not exist" containerID="e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.354121 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d"} err="failed to get container status \"e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d\": rpc error: code = NotFound desc = could not find container \"e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d\": container with ID starting with e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d not found: ID does not exist" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.354142 4787 scope.go:117] "RemoveContainer" containerID="756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.354377 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd"} err="failed to get container status \"756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd\": rpc error: code = NotFound desc = could not find container \"756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd\": container with ID starting with 756d0db58949cec21ac40d1868645f8a56c6c3fa5a5c60171c2cf201d8c046bd not found: ID does not exist" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.354395 4787 scope.go:117] "RemoveContainer" containerID="e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.354617 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d"} err="failed to get container status \"e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d\": rpc error: code = NotFound desc = could not find container \"e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d\": container with ID starting with e0391e8a5c2cddb4063b7ed8faddccedc00a72bead3a43577ab0912fe5b1bb3d not found: ID does not 
exist" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.371184 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:14:40 crc kubenswrapper[4787]: E0126 19:14:40.371656 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" containerName="glance-httpd" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.371677 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" containerName="glance-httpd" Jan 26 19:14:40 crc kubenswrapper[4787]: E0126 19:14:40.371696 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" containerName="glance-log" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.371703 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" containerName="glance-log" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.371896 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" containerName="glance-log" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.371924 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" containerName="glance-httpd" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.373012 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.375328 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.380759 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.506205 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-ceph\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.506290 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.506381 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rtpn\" (UniqueName: \"kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-kube-api-access-4rtpn\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.506421 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-scripts\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.506447 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-config-data\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.506480 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.506499 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-logs\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.607692 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-config-data\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.607974 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.608065 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-logs\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.608177 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-ceph\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.608308 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.608480 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rtpn\" (UniqueName: \"kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-kube-api-access-4rtpn\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.608585 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-scripts\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 
19:14:40.608824 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-logs\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.608923 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.613630 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-ceph\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.614163 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-config-data\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.615372 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.618035 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-scripts\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.626823 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rtpn\" (UniqueName: \"kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-kube-api-access-4rtpn\") pod \"glance-default-internal-api-0\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:14:40 crc kubenswrapper[4787]: I0126 19:14:40.695832 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:41 crc kubenswrapper[4787]: I0126 19:14:41.216259 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:14:41 crc kubenswrapper[4787]: I0126 19:14:41.283181 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"330b30cb-11c3-4d78-aa27-a522b770aa67","Type":"ContainerStarted","Data":"2d8dd416aa3dc08bc6a30a618e368853e784e3d5f928b3441a66310df0353701"} Jan 26 19:14:41 crc kubenswrapper[4787]: I0126 19:14:41.600828 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d1faf1e-5ba5-4864-b717-a86c7f7f89f3" path="/var/lib/kubelet/pods/9d1faf1e-5ba5-4864-b717-a86c7f7f89f3/volumes" Jan 26 19:14:42 crc kubenswrapper[4787]: I0126 19:14:42.297359 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"330b30cb-11c3-4d78-aa27-a522b770aa67","Type":"ContainerStarted","Data":"2ad6023bdccb28717ea07e7d21c98782767a45516ce92c950c3ac4ab40753e28"} Jan 26 19:14:42 crc kubenswrapper[4787]: I0126 19:14:42.297657 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"330b30cb-11c3-4d78-aa27-a522b770aa67","Type":"ContainerStarted","Data":"2677c97a2a2d38fe8afa249f8228504b8768f1056b0ac0a0c70a415c11870909"} Jan 26 19:14:42 crc kubenswrapper[4787]: I0126 19:14:42.320194 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.320160497 podStartE2EDuration="2.320160497s" podCreationTimestamp="2026-01-26 19:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:14:42.317365009 +0000 UTC m=+5451.024501142" watchObservedRunningTime="2026-01-26 19:14:42.320160497 +0000 UTC m=+5451.027296630" Jan 26 19:14:44 crc kubenswrapper[4787]: I0126 19:14:44.879101 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:14:44 crc kubenswrapper[4787]: I0126 19:14:44.953191 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-584488c86c-rpvzc"] Jan 26 19:14:44 crc kubenswrapper[4787]: I0126 19:14:44.953460 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" podUID="d1da43e0-33bb-4e27-a2cc-04503af4d164" containerName="dnsmasq-dns" containerID="cri-o://9c40ca33b19b6c3a3b431705cca0d83acc3b1b3f01f3cefe3020265fd537c5e2" gracePeriod=10 Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.331873 4787 generic.go:334] "Generic (PLEG): container finished" podID="d1da43e0-33bb-4e27-a2cc-04503af4d164" containerID="9c40ca33b19b6c3a3b431705cca0d83acc3b1b3f01f3cefe3020265fd537c5e2" exitCode=0 Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.331991 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" event={"ID":"d1da43e0-33bb-4e27-a2cc-04503af4d164","Type":"ContainerDied","Data":"9c40ca33b19b6c3a3b431705cca0d83acc3b1b3f01f3cefe3020265fd537c5e2"} Jan 
26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.461898 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.495648 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-nb\") pod \"d1da43e0-33bb-4e27-a2cc-04503af4d164\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.495706 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-sb\") pod \"d1da43e0-33bb-4e27-a2cc-04503af4d164\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.495755 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-config\") pod \"d1da43e0-33bb-4e27-a2cc-04503af4d164\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.495806 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-dns-svc\") pod \"d1da43e0-33bb-4e27-a2cc-04503af4d164\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.496539 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtk2r\" (UniqueName: \"kubernetes.io/projected/d1da43e0-33bb-4e27-a2cc-04503af4d164-kube-api-access-jtk2r\") pod \"d1da43e0-33bb-4e27-a2cc-04503af4d164\" (UID: \"d1da43e0-33bb-4e27-a2cc-04503af4d164\") " Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 
19:14:45.502881 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1da43e0-33bb-4e27-a2cc-04503af4d164-kube-api-access-jtk2r" (OuterVolumeSpecName: "kube-api-access-jtk2r") pod "d1da43e0-33bb-4e27-a2cc-04503af4d164" (UID: "d1da43e0-33bb-4e27-a2cc-04503af4d164"). InnerVolumeSpecName "kube-api-access-jtk2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.541859 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1da43e0-33bb-4e27-a2cc-04503af4d164" (UID: "d1da43e0-33bb-4e27-a2cc-04503af4d164"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.544603 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1da43e0-33bb-4e27-a2cc-04503af4d164" (UID: "d1da43e0-33bb-4e27-a2cc-04503af4d164"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.544727 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1da43e0-33bb-4e27-a2cc-04503af4d164" (UID: "d1da43e0-33bb-4e27-a2cc-04503af4d164"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.566293 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-config" (OuterVolumeSpecName: "config") pod "d1da43e0-33bb-4e27-a2cc-04503af4d164" (UID: "d1da43e0-33bb-4e27-a2cc-04503af4d164"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.598572 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtk2r\" (UniqueName: \"kubernetes.io/projected/d1da43e0-33bb-4e27-a2cc-04503af4d164-kube-api-access-jtk2r\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.598614 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.598625 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.598639 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:45 crc kubenswrapper[4787]: I0126 19:14:45.598652 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1da43e0-33bb-4e27-a2cc-04503af4d164-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:46 crc kubenswrapper[4787]: I0126 19:14:46.762238 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" 
event={"ID":"d1da43e0-33bb-4e27-a2cc-04503af4d164","Type":"ContainerDied","Data":"54d57dce6888036df2cbd58aad06262eb837747c9db0995c991a469848cebe04"} Jan 26 19:14:46 crc kubenswrapper[4787]: I0126 19:14:46.762851 4787 scope.go:117] "RemoveContainer" containerID="9c40ca33b19b6c3a3b431705cca0d83acc3b1b3f01f3cefe3020265fd537c5e2" Jan 26 19:14:46 crc kubenswrapper[4787]: I0126 19:14:46.762548 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-584488c86c-rpvzc" Jan 26 19:14:46 crc kubenswrapper[4787]: I0126 19:14:46.802885 4787 scope.go:117] "RemoveContainer" containerID="e4e78bb9a0394cd016fa7b3132222a11a58e84ccb7fb0f6d34ae361eae205acd" Jan 26 19:14:46 crc kubenswrapper[4787]: I0126 19:14:46.807359 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-584488c86c-rpvzc"] Jan 26 19:14:46 crc kubenswrapper[4787]: I0126 19:14:46.817718 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-584488c86c-rpvzc"] Jan 26 19:14:47 crc kubenswrapper[4787]: I0126 19:14:47.606073 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1da43e0-33bb-4e27-a2cc-04503af4d164" path="/var/lib/kubelet/pods/d1da43e0-33bb-4e27-a2cc-04503af4d164/volumes" Jan 26 19:14:48 crc kubenswrapper[4787]: I0126 19:14:48.589308 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:14:48 crc kubenswrapper[4787]: E0126 19:14:48.589621 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:14:48 crc kubenswrapper[4787]: I0126 
19:14:48.621690 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 19:14:48 crc kubenswrapper[4787]: I0126 19:14:48.623278 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 19:14:48 crc kubenswrapper[4787]: I0126 19:14:48.674697 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 19:14:48 crc kubenswrapper[4787]: I0126 19:14:48.690000 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 19:14:48 crc kubenswrapper[4787]: I0126 19:14:48.786051 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 19:14:48 crc kubenswrapper[4787]: I0126 19:14:48.786287 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 19:14:50 crc kubenswrapper[4787]: I0126 19:14:50.696767 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:50 crc kubenswrapper[4787]: I0126 19:14:50.697227 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:50 crc kubenswrapper[4787]: I0126 19:14:50.733555 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:50 crc kubenswrapper[4787]: I0126 19:14:50.734910 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:50 crc kubenswrapper[4787]: I0126 19:14:50.801659 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 19:14:50 crc kubenswrapper[4787]: I0126 19:14:50.801984 4787 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 19:14:50 crc kubenswrapper[4787]: I0126 19:14:50.802897 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:50 crc kubenswrapper[4787]: I0126 19:14:50.803516 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:50 crc kubenswrapper[4787]: I0126 19:14:50.806294 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 19:14:50 crc kubenswrapper[4787]: I0126 19:14:50.809644 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 19:14:52 crc kubenswrapper[4787]: I0126 19:14:52.777216 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 19:14:52 crc kubenswrapper[4787]: I0126 19:14:52.818392 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 19:14:52 crc kubenswrapper[4787]: I0126 19:14:52.882346 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.155316 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9"] Jan 26 19:15:00 crc kubenswrapper[4787]: E0126 19:15:00.156064 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1da43e0-33bb-4e27-a2cc-04503af4d164" containerName="init" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.156075 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1da43e0-33bb-4e27-a2cc-04503af4d164" containerName="init" Jan 26 19:15:00 crc kubenswrapper[4787]: E0126 19:15:00.156089 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d1da43e0-33bb-4e27-a2cc-04503af4d164" containerName="dnsmasq-dns" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.156097 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1da43e0-33bb-4e27-a2cc-04503af4d164" containerName="dnsmasq-dns" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.156284 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1da43e0-33bb-4e27-a2cc-04503af4d164" containerName="dnsmasq-dns" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.156917 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.163559 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.163819 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.169302 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9"] Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.271345 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-config-volume\") pod \"collect-profiles-29490915-sdft9\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.271416 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk46m\" (UniqueName: \"kubernetes.io/projected/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-kube-api-access-dk46m\") 
pod \"collect-profiles-29490915-sdft9\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.271763 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-secret-volume\") pod \"collect-profiles-29490915-sdft9\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.373184 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-config-volume\") pod \"collect-profiles-29490915-sdft9\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.373240 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk46m\" (UniqueName: \"kubernetes.io/projected/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-kube-api-access-dk46m\") pod \"collect-profiles-29490915-sdft9\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.373321 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-secret-volume\") pod \"collect-profiles-29490915-sdft9\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.374520 4787 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-config-volume\") pod \"collect-profiles-29490915-sdft9\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.381309 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-secret-volume\") pod \"collect-profiles-29490915-sdft9\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.395757 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk46m\" (UniqueName: \"kubernetes.io/projected/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-kube-api-access-dk46m\") pod \"collect-profiles-29490915-sdft9\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.431236 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vhnqg"] Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.432353 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vhnqg" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.442674 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vhnqg"] Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.491377 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.526182 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-38b5-account-create-update-8l77l"] Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.527578 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-38b5-account-create-update-8l77l" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.540645 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.545849 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-38b5-account-create-update-8l77l"] Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.578047 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97nmm\" (UniqueName: \"kubernetes.io/projected/2633f61e-8554-4bd8-968f-bc6f4b8532ab-kube-api-access-97nmm\") pod \"placement-db-create-vhnqg\" (UID: \"2633f61e-8554-4bd8-968f-bc6f4b8532ab\") " pod="openstack/placement-db-create-vhnqg" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.578138 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2633f61e-8554-4bd8-968f-bc6f4b8532ab-operator-scripts\") pod \"placement-db-create-vhnqg\" (UID: \"2633f61e-8554-4bd8-968f-bc6f4b8532ab\") " pod="openstack/placement-db-create-vhnqg" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.683110 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7twg\" (UniqueName: \"kubernetes.io/projected/2966afba-49ec-4933-9c11-878ebdf1c724-kube-api-access-m7twg\") pod \"placement-38b5-account-create-update-8l77l\" (UID: 
\"2966afba-49ec-4933-9c11-878ebdf1c724\") " pod="openstack/placement-38b5-account-create-update-8l77l" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.683214 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2966afba-49ec-4933-9c11-878ebdf1c724-operator-scripts\") pod \"placement-38b5-account-create-update-8l77l\" (UID: \"2966afba-49ec-4933-9c11-878ebdf1c724\") " pod="openstack/placement-38b5-account-create-update-8l77l" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.683290 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97nmm\" (UniqueName: \"kubernetes.io/projected/2633f61e-8554-4bd8-968f-bc6f4b8532ab-kube-api-access-97nmm\") pod \"placement-db-create-vhnqg\" (UID: \"2633f61e-8554-4bd8-968f-bc6f4b8532ab\") " pod="openstack/placement-db-create-vhnqg" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.683356 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2633f61e-8554-4bd8-968f-bc6f4b8532ab-operator-scripts\") pod \"placement-db-create-vhnqg\" (UID: \"2633f61e-8554-4bd8-968f-bc6f4b8532ab\") " pod="openstack/placement-db-create-vhnqg" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.684149 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2633f61e-8554-4bd8-968f-bc6f4b8532ab-operator-scripts\") pod \"placement-db-create-vhnqg\" (UID: \"2633f61e-8554-4bd8-968f-bc6f4b8532ab\") " pod="openstack/placement-db-create-vhnqg" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.719676 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97nmm\" (UniqueName: \"kubernetes.io/projected/2633f61e-8554-4bd8-968f-bc6f4b8532ab-kube-api-access-97nmm\") pod 
\"placement-db-create-vhnqg\" (UID: \"2633f61e-8554-4bd8-968f-bc6f4b8532ab\") " pod="openstack/placement-db-create-vhnqg" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.788803 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vhnqg" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.789603 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7twg\" (UniqueName: \"kubernetes.io/projected/2966afba-49ec-4933-9c11-878ebdf1c724-kube-api-access-m7twg\") pod \"placement-38b5-account-create-update-8l77l\" (UID: \"2966afba-49ec-4933-9c11-878ebdf1c724\") " pod="openstack/placement-38b5-account-create-update-8l77l" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.789692 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2966afba-49ec-4933-9c11-878ebdf1c724-operator-scripts\") pod \"placement-38b5-account-create-update-8l77l\" (UID: \"2966afba-49ec-4933-9c11-878ebdf1c724\") " pod="openstack/placement-38b5-account-create-update-8l77l" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.790584 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2966afba-49ec-4933-9c11-878ebdf1c724-operator-scripts\") pod \"placement-38b5-account-create-update-8l77l\" (UID: \"2966afba-49ec-4933-9c11-878ebdf1c724\") " pod="openstack/placement-38b5-account-create-update-8l77l" Jan 26 19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.813227 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7twg\" (UniqueName: \"kubernetes.io/projected/2966afba-49ec-4933-9c11-878ebdf1c724-kube-api-access-m7twg\") pod \"placement-38b5-account-create-update-8l77l\" (UID: \"2966afba-49ec-4933-9c11-878ebdf1c724\") " pod="openstack/placement-38b5-account-create-update-8l77l" Jan 26 
19:15:00 crc kubenswrapper[4787]: I0126 19:15:00.915864 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-38b5-account-create-update-8l77l" Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.162932 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9"] Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.262581 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vhnqg"] Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.484756 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-38b5-account-create-update-8l77l"] Jan 26 19:15:01 crc kubenswrapper[4787]: W0126 19:15:01.486088 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2966afba_49ec_4933_9c11_878ebdf1c724.slice/crio-a72c18c72ba89689fe4aca257af34f7b9a9921bdd894165835e5ee942dd90082 WatchSource:0}: Error finding container a72c18c72ba89689fe4aca257af34f7b9a9921bdd894165835e5ee942dd90082: Status 404 returned error can't find the container with id a72c18c72ba89689fe4aca257af34f7b9a9921bdd894165835e5ee942dd90082 Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.890063 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-38b5-account-create-update-8l77l" event={"ID":"2966afba-49ec-4933-9c11-878ebdf1c724","Type":"ContainerStarted","Data":"5db8df89b67cef9daf295fc187e0ffd77a966428e5f83dc79d4be4973b1981af"} Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.890105 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-38b5-account-create-update-8l77l" event={"ID":"2966afba-49ec-4933-9c11-878ebdf1c724","Type":"ContainerStarted","Data":"a72c18c72ba89689fe4aca257af34f7b9a9921bdd894165835e5ee942dd90082"} Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.891630 4787 
generic.go:334] "Generic (PLEG): container finished" podID="2633f61e-8554-4bd8-968f-bc6f4b8532ab" containerID="332d79b790b613ba3bb665983e627f2e882653a0dd639ae87ba41993082a6416" exitCode=0 Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.891680 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vhnqg" event={"ID":"2633f61e-8554-4bd8-968f-bc6f4b8532ab","Type":"ContainerDied","Data":"332d79b790b613ba3bb665983e627f2e882653a0dd639ae87ba41993082a6416"} Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.892039 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vhnqg" event={"ID":"2633f61e-8554-4bd8-968f-bc6f4b8532ab","Type":"ContainerStarted","Data":"2af94a56f67e7b5a47d389d263cf3f5a483311d4abaa927fa147f8e22937671b"} Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.893893 4787 generic.go:334] "Generic (PLEG): container finished" podID="343a5b16-6144-4761-8e9c-b4afbeb3dc2d" containerID="4445532f91802a828f0c8250991b94cef544289b0bd7af1f0422407877daa91e" exitCode=0 Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.893963 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" event={"ID":"343a5b16-6144-4761-8e9c-b4afbeb3dc2d","Type":"ContainerDied","Data":"4445532f91802a828f0c8250991b94cef544289b0bd7af1f0422407877daa91e"} Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.893994 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" event={"ID":"343a5b16-6144-4761-8e9c-b4afbeb3dc2d","Type":"ContainerStarted","Data":"d41a95e13d8f3315b1ba3cf16026834a7174164e9947997f041970c9cfab1c35"} Jan 26 19:15:01 crc kubenswrapper[4787]: I0126 19:15:01.909118 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-38b5-account-create-update-8l77l" podStartSLOduration=1.909102581 
podStartE2EDuration="1.909102581s" podCreationTimestamp="2026-01-26 19:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:15:01.904658883 +0000 UTC m=+5470.611795016" watchObservedRunningTime="2026-01-26 19:15:01.909102581 +0000 UTC m=+5470.616238714" Jan 26 19:15:02 crc kubenswrapper[4787]: I0126 19:15:02.589848 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:15:02 crc kubenswrapper[4787]: E0126 19:15:02.590125 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:15:02 crc kubenswrapper[4787]: I0126 19:15:02.904199 4787 generic.go:334] "Generic (PLEG): container finished" podID="2966afba-49ec-4933-9c11-878ebdf1c724" containerID="5db8df89b67cef9daf295fc187e0ffd77a966428e5f83dc79d4be4973b1981af" exitCode=0 Jan 26 19:15:02 crc kubenswrapper[4787]: I0126 19:15:02.904280 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-38b5-account-create-update-8l77l" event={"ID":"2966afba-49ec-4933-9c11-878ebdf1c724","Type":"ContainerDied","Data":"5db8df89b67cef9daf295fc187e0ffd77a966428e5f83dc79d4be4973b1981af"} Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.312235 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.321998 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vhnqg" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.436891 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97nmm\" (UniqueName: \"kubernetes.io/projected/2633f61e-8554-4bd8-968f-bc6f4b8532ab-kube-api-access-97nmm\") pod \"2633f61e-8554-4bd8-968f-bc6f4b8532ab\" (UID: \"2633f61e-8554-4bd8-968f-bc6f4b8532ab\") " Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.437080 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk46m\" (UniqueName: \"kubernetes.io/projected/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-kube-api-access-dk46m\") pod \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.437244 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2633f61e-8554-4bd8-968f-bc6f4b8532ab-operator-scripts\") pod \"2633f61e-8554-4bd8-968f-bc6f4b8532ab\" (UID: \"2633f61e-8554-4bd8-968f-bc6f4b8532ab\") " Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.437317 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-config-volume\") pod \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.437513 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-secret-volume\") pod \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\" (UID: \"343a5b16-6144-4761-8e9c-b4afbeb3dc2d\") " Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.437803 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-config-volume" (OuterVolumeSpecName: "config-volume") pod "343a5b16-6144-4761-8e9c-b4afbeb3dc2d" (UID: "343a5b16-6144-4761-8e9c-b4afbeb3dc2d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.437806 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2633f61e-8554-4bd8-968f-bc6f4b8532ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2633f61e-8554-4bd8-968f-bc6f4b8532ab" (UID: "2633f61e-8554-4bd8-968f-bc6f4b8532ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.438353 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2633f61e-8554-4bd8-968f-bc6f4b8532ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.438386 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.442666 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "343a5b16-6144-4761-8e9c-b4afbeb3dc2d" (UID: "343a5b16-6144-4761-8e9c-b4afbeb3dc2d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.442724 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-kube-api-access-dk46m" (OuterVolumeSpecName: "kube-api-access-dk46m") pod "343a5b16-6144-4761-8e9c-b4afbeb3dc2d" (UID: "343a5b16-6144-4761-8e9c-b4afbeb3dc2d"). InnerVolumeSpecName "kube-api-access-dk46m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.444117 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2633f61e-8554-4bd8-968f-bc6f4b8532ab-kube-api-access-97nmm" (OuterVolumeSpecName: "kube-api-access-97nmm") pod "2633f61e-8554-4bd8-968f-bc6f4b8532ab" (UID: "2633f61e-8554-4bd8-968f-bc6f4b8532ab"). InnerVolumeSpecName "kube-api-access-97nmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.539987 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk46m\" (UniqueName: \"kubernetes.io/projected/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-kube-api-access-dk46m\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.540027 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/343a5b16-6144-4761-8e9c-b4afbeb3dc2d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.540037 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97nmm\" (UniqueName: \"kubernetes.io/projected/2633f61e-8554-4bd8-968f-bc6f4b8532ab-kube-api-access-97nmm\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.917088 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vhnqg" 
event={"ID":"2633f61e-8554-4bd8-968f-bc6f4b8532ab","Type":"ContainerDied","Data":"2af94a56f67e7b5a47d389d263cf3f5a483311d4abaa927fa147f8e22937671b"} Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.917127 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2af94a56f67e7b5a47d389d263cf3f5a483311d4abaa927fa147f8e22937671b" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.917131 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vhnqg" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.919512 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.919668 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9" event={"ID":"343a5b16-6144-4761-8e9c-b4afbeb3dc2d","Type":"ContainerDied","Data":"d41a95e13d8f3315b1ba3cf16026834a7174164e9947997f041970c9cfab1c35"} Jan 26 19:15:03 crc kubenswrapper[4787]: I0126 19:15:03.919788 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d41a95e13d8f3315b1ba3cf16026834a7174164e9947997f041970c9cfab1c35" Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.185949 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-38b5-account-create-update-8l77l" Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.354902 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7twg\" (UniqueName: \"kubernetes.io/projected/2966afba-49ec-4933-9c11-878ebdf1c724-kube-api-access-m7twg\") pod \"2966afba-49ec-4933-9c11-878ebdf1c724\" (UID: \"2966afba-49ec-4933-9c11-878ebdf1c724\") " Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.355113 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2966afba-49ec-4933-9c11-878ebdf1c724-operator-scripts\") pod \"2966afba-49ec-4933-9c11-878ebdf1c724\" (UID: \"2966afba-49ec-4933-9c11-878ebdf1c724\") " Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.356202 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2966afba-49ec-4933-9c11-878ebdf1c724-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2966afba-49ec-4933-9c11-878ebdf1c724" (UID: "2966afba-49ec-4933-9c11-878ebdf1c724"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.361522 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2966afba-49ec-4933-9c11-878ebdf1c724-kube-api-access-m7twg" (OuterVolumeSpecName: "kube-api-access-m7twg") pod "2966afba-49ec-4933-9c11-878ebdf1c724" (UID: "2966afba-49ec-4933-9c11-878ebdf1c724"). InnerVolumeSpecName "kube-api-access-m7twg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.382898 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq"] Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.390490 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-xkdwq"] Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.457141 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7twg\" (UniqueName: \"kubernetes.io/projected/2966afba-49ec-4933-9c11-878ebdf1c724-kube-api-access-m7twg\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.457174 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2966afba-49ec-4933-9c11-878ebdf1c724-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.930603 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-38b5-account-create-update-8l77l" event={"ID":"2966afba-49ec-4933-9c11-878ebdf1c724","Type":"ContainerDied","Data":"a72c18c72ba89689fe4aca257af34f7b9a9921bdd894165835e5ee942dd90082"} Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.930660 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a72c18c72ba89689fe4aca257af34f7b9a9921bdd894165835e5ee942dd90082" Jan 26 19:15:04 crc kubenswrapper[4787]: I0126 19:15:04.930685 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-38b5-account-create-update-8l77l" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.599827 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c470f855-7b78-4696-a92a-f499f0320def" path="/var/lib/kubelet/pods/c470f855-7b78-4696-a92a-f499f0320def/volumes" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.841355 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7df7578dcc-hqbgd"] Jan 26 19:15:05 crc kubenswrapper[4787]: E0126 19:15:05.841727 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343a5b16-6144-4761-8e9c-b4afbeb3dc2d" containerName="collect-profiles" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.841745 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="343a5b16-6144-4761-8e9c-b4afbeb3dc2d" containerName="collect-profiles" Jan 26 19:15:05 crc kubenswrapper[4787]: E0126 19:15:05.841761 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2633f61e-8554-4bd8-968f-bc6f4b8532ab" containerName="mariadb-database-create" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.841767 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2633f61e-8554-4bd8-968f-bc6f4b8532ab" containerName="mariadb-database-create" Jan 26 19:15:05 crc kubenswrapper[4787]: E0126 19:15:05.841774 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2966afba-49ec-4933-9c11-878ebdf1c724" containerName="mariadb-account-create-update" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.841781 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2966afba-49ec-4933-9c11-878ebdf1c724" containerName="mariadb-account-create-update" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.841931 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2633f61e-8554-4bd8-968f-bc6f4b8532ab" containerName="mariadb-database-create" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 
19:15:05.841947 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="343a5b16-6144-4761-8e9c-b4afbeb3dc2d" containerName="collect-profiles" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.841982 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2966afba-49ec-4933-9c11-878ebdf1c724" containerName="mariadb-account-create-update" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.842777 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.857375 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7df7578dcc-hqbgd"] Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.916004 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-24hgp"] Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.917266 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.919279 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.919863 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.922075 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-n45b8" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.925548 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-24hgp"] Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.990866 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-nb\") 
pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.991298 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-config\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.991431 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-dns-svc\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.991470 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-sb\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:05 crc kubenswrapper[4787]: I0126 19:15:05.991500 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpmb\" (UniqueName: \"kubernetes.io/projected/5042020b-f4a1-4f94-b15c-d20917ffa7d4-kube-api-access-6xpmb\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.093224 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-sb\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.093309 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc0338e1-d85f-4da2-a41f-7791cdfeb108-logs\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.093338 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xpmb\" (UniqueName: \"kubernetes.io/projected/5042020b-f4a1-4f94-b15c-d20917ffa7d4-kube-api-access-6xpmb\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.093393 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-scripts\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.093435 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzg6k\" (UniqueName: \"kubernetes.io/projected/cc0338e1-d85f-4da2-a41f-7791cdfeb108-kube-api-access-dzg6k\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.093489 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-nb\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.093623 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-combined-ca-bundle\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.093656 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-config\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.093705 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-config-data\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.093753 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-dns-svc\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.094534 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-sb\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.095166 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-config\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.095179 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-nb\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.095194 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-dns-svc\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.110893 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xpmb\" (UniqueName: \"kubernetes.io/projected/5042020b-f4a1-4f94-b15c-d20917ffa7d4-kube-api-access-6xpmb\") pod \"dnsmasq-dns-7df7578dcc-hqbgd\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.167838 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.195881 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-combined-ca-bundle\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.196240 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-config-data\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.196311 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc0338e1-d85f-4da2-a41f-7791cdfeb108-logs\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.196350 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-scripts\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.196419 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzg6k\" (UniqueName: \"kubernetes.io/projected/cc0338e1-d85f-4da2-a41f-7791cdfeb108-kube-api-access-dzg6k\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 
19:15:06.196756 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc0338e1-d85f-4da2-a41f-7791cdfeb108-logs\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.200296 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-scripts\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.200874 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-config-data\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.200956 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-combined-ca-bundle\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.215908 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzg6k\" (UniqueName: \"kubernetes.io/projected/cc0338e1-d85f-4da2-a41f-7791cdfeb108-kube-api-access-dzg6k\") pod \"placement-db-sync-24hgp\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.260878 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.627364 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7df7578dcc-hqbgd"] Jan 26 19:15:06 crc kubenswrapper[4787]: W0126 19:15:06.630380 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5042020b_f4a1_4f94_b15c_d20917ffa7d4.slice/crio-4dcaa0cd451231d6b5aa71b11cd6a6c939974f47a1c85536400e119df66fab42 WatchSource:0}: Error finding container 4dcaa0cd451231d6b5aa71b11cd6a6c939974f47a1c85536400e119df66fab42: Status 404 returned error can't find the container with id 4dcaa0cd451231d6b5aa71b11cd6a6c939974f47a1c85536400e119df66fab42 Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.736825 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-24hgp"] Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.953052 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-24hgp" event={"ID":"cc0338e1-d85f-4da2-a41f-7791cdfeb108","Type":"ContainerStarted","Data":"fd5934e5ff2a9a6f894fecd3c576d1a0e0be1b4d865361dde1ce28c869c6f8d2"} Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.953114 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-24hgp" event={"ID":"cc0338e1-d85f-4da2-a41f-7791cdfeb108","Type":"ContainerStarted","Data":"e1a9c1bf1a2867bd533120b3d5e08a8df4c470f7f2801d69a4e004ece0b453cc"} Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.956731 4787 generic.go:334] "Generic (PLEG): container finished" podID="5042020b-f4a1-4f94-b15c-d20917ffa7d4" containerID="4c1ad5645f51c2895fc49a72523fa100948eaedcb45f7bd05fc9dfe8930f3a97" exitCode=0 Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.956775 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" 
event={"ID":"5042020b-f4a1-4f94-b15c-d20917ffa7d4","Type":"ContainerDied","Data":"4c1ad5645f51c2895fc49a72523fa100948eaedcb45f7bd05fc9dfe8930f3a97"} Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.956806 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" event={"ID":"5042020b-f4a1-4f94-b15c-d20917ffa7d4","Type":"ContainerStarted","Data":"4dcaa0cd451231d6b5aa71b11cd6a6c939974f47a1c85536400e119df66fab42"} Jan 26 19:15:06 crc kubenswrapper[4787]: I0126 19:15:06.973938 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-24hgp" podStartSLOduration=1.973918307 podStartE2EDuration="1.973918307s" podCreationTimestamp="2026-01-26 19:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:15:06.972131144 +0000 UTC m=+5475.679267277" watchObservedRunningTime="2026-01-26 19:15:06.973918307 +0000 UTC m=+5475.681054440" Jan 26 19:15:07 crc kubenswrapper[4787]: I0126 19:15:07.970843 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" event={"ID":"5042020b-f4a1-4f94-b15c-d20917ffa7d4","Type":"ContainerStarted","Data":"d7b0151356e040fbe6a6f942a30b1a60c2013ccce193da044311ce28fd14f27e"} Jan 26 19:15:07 crc kubenswrapper[4787]: I0126 19:15:07.971097 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:08 crc kubenswrapper[4787]: I0126 19:15:08.008673 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" podStartSLOduration=3.008649876 podStartE2EDuration="3.008649876s" podCreationTimestamp="2026-01-26 19:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:15:08.002339913 +0000 UTC 
m=+5476.709476046" watchObservedRunningTime="2026-01-26 19:15:08.008649876 +0000 UTC m=+5476.715786009" Jan 26 19:15:08 crc kubenswrapper[4787]: I0126 19:15:08.980099 4787 generic.go:334] "Generic (PLEG): container finished" podID="cc0338e1-d85f-4da2-a41f-7791cdfeb108" containerID="fd5934e5ff2a9a6f894fecd3c576d1a0e0be1b4d865361dde1ce28c869c6f8d2" exitCode=0 Jan 26 19:15:08 crc kubenswrapper[4787]: I0126 19:15:08.980508 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-24hgp" event={"ID":"cc0338e1-d85f-4da2-a41f-7791cdfeb108","Type":"ContainerDied","Data":"fd5934e5ff2a9a6f894fecd3c576d1a0e0be1b4d865361dde1ce28c869c6f8d2"} Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.317173 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.465890 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-config-data\") pod \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.466014 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-scripts\") pod \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.466056 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzg6k\" (UniqueName: \"kubernetes.io/projected/cc0338e1-d85f-4da2-a41f-7791cdfeb108-kube-api-access-dzg6k\") pod \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.466094 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-combined-ca-bundle\") pod \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.466165 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc0338e1-d85f-4da2-a41f-7791cdfeb108-logs\") pod \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\" (UID: \"cc0338e1-d85f-4da2-a41f-7791cdfeb108\") " Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.466495 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0338e1-d85f-4da2-a41f-7791cdfeb108-logs" (OuterVolumeSpecName: "logs") pod "cc0338e1-d85f-4da2-a41f-7791cdfeb108" (UID: "cc0338e1-d85f-4da2-a41f-7791cdfeb108"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.466735 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc0338e1-d85f-4da2-a41f-7791cdfeb108-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.471033 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0338e1-d85f-4da2-a41f-7791cdfeb108-kube-api-access-dzg6k" (OuterVolumeSpecName: "kube-api-access-dzg6k") pod "cc0338e1-d85f-4da2-a41f-7791cdfeb108" (UID: "cc0338e1-d85f-4da2-a41f-7791cdfeb108"). InnerVolumeSpecName "kube-api-access-dzg6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.486296 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-scripts" (OuterVolumeSpecName: "scripts") pod "cc0338e1-d85f-4da2-a41f-7791cdfeb108" (UID: "cc0338e1-d85f-4da2-a41f-7791cdfeb108"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.494809 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-config-data" (OuterVolumeSpecName: "config-data") pod "cc0338e1-d85f-4da2-a41f-7791cdfeb108" (UID: "cc0338e1-d85f-4da2-a41f-7791cdfeb108"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.502629 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc0338e1-d85f-4da2-a41f-7791cdfeb108" (UID: "cc0338e1-d85f-4da2-a41f-7791cdfeb108"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.567946 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.568002 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzg6k\" (UniqueName: \"kubernetes.io/projected/cc0338e1-d85f-4da2-a41f-7791cdfeb108-kube-api-access-dzg6k\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.568018 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.568029 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc0338e1-d85f-4da2-a41f-7791cdfeb108-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.999074 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-24hgp" event={"ID":"cc0338e1-d85f-4da2-a41f-7791cdfeb108","Type":"ContainerDied","Data":"e1a9c1bf1a2867bd533120b3d5e08a8df4c470f7f2801d69a4e004ece0b453cc"} Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.999121 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1a9c1bf1a2867bd533120b3d5e08a8df4c470f7f2801d69a4e004ece0b453cc" Jan 26 19:15:10 crc kubenswrapper[4787]: I0126 19:15:10.999143 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-24hgp" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.077635 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d5ff996bd-vhgds"] Jan 26 19:15:11 crc kubenswrapper[4787]: E0126 19:15:11.077987 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0338e1-d85f-4da2-a41f-7791cdfeb108" containerName="placement-db-sync" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.078005 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0338e1-d85f-4da2-a41f-7791cdfeb108" containerName="placement-db-sync" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.078163 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0338e1-d85f-4da2-a41f-7791cdfeb108" containerName="placement-db-sync" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.079026 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.081282 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.082069 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-n45b8" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.084466 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.096470 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d5ff996bd-vhgds"] Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.177857 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737ad08a-5871-45a8-b16f-085d03fbaba4-config-data\") pod \"placement-7d5ff996bd-vhgds\" (UID: 
\"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.178250 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737ad08a-5871-45a8-b16f-085d03fbaba4-combined-ca-bundle\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.178411 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/737ad08a-5871-45a8-b16f-085d03fbaba4-logs\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.178474 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr8r4\" (UniqueName: \"kubernetes.io/projected/737ad08a-5871-45a8-b16f-085d03fbaba4-kube-api-access-vr8r4\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.178636 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737ad08a-5871-45a8-b16f-085d03fbaba4-scripts\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.279702 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737ad08a-5871-45a8-b16f-085d03fbaba4-scripts\") pod \"placement-7d5ff996bd-vhgds\" (UID: 
\"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.280142 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737ad08a-5871-45a8-b16f-085d03fbaba4-config-data\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.280292 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737ad08a-5871-45a8-b16f-085d03fbaba4-combined-ca-bundle\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.280435 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/737ad08a-5871-45a8-b16f-085d03fbaba4-logs\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.280538 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr8r4\" (UniqueName: \"kubernetes.io/projected/737ad08a-5871-45a8-b16f-085d03fbaba4-kube-api-access-vr8r4\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.280822 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/737ad08a-5871-45a8-b16f-085d03fbaba4-logs\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc 
kubenswrapper[4787]: I0126 19:15:11.285537 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737ad08a-5871-45a8-b16f-085d03fbaba4-config-data\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.291290 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737ad08a-5871-45a8-b16f-085d03fbaba4-scripts\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.292644 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737ad08a-5871-45a8-b16f-085d03fbaba4-combined-ca-bundle\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.312681 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr8r4\" (UniqueName: \"kubernetes.io/projected/737ad08a-5871-45a8-b16f-085d03fbaba4-kube-api-access-vr8r4\") pod \"placement-7d5ff996bd-vhgds\" (UID: \"737ad08a-5871-45a8-b16f-085d03fbaba4\") " pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.398514 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:11 crc kubenswrapper[4787]: W0126 19:15:11.964877 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod737ad08a_5871_45a8_b16f_085d03fbaba4.slice/crio-a39db7300fae0465d88c697d7fd7fb941da29bd76c6de4ff64e221e02db40377 WatchSource:0}: Error finding container a39db7300fae0465d88c697d7fd7fb941da29bd76c6de4ff64e221e02db40377: Status 404 returned error can't find the container with id a39db7300fae0465d88c697d7fd7fb941da29bd76c6de4ff64e221e02db40377 Jan 26 19:15:11 crc kubenswrapper[4787]: I0126 19:15:11.998377 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d5ff996bd-vhgds"] Jan 26 19:15:12 crc kubenswrapper[4787]: I0126 19:15:12.021134 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d5ff996bd-vhgds" event={"ID":"737ad08a-5871-45a8-b16f-085d03fbaba4","Type":"ContainerStarted","Data":"a39db7300fae0465d88c697d7fd7fb941da29bd76c6de4ff64e221e02db40377"} Jan 26 19:15:13 crc kubenswrapper[4787]: I0126 19:15:13.034044 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d5ff996bd-vhgds" event={"ID":"737ad08a-5871-45a8-b16f-085d03fbaba4","Type":"ContainerStarted","Data":"9c6d9247a6bcf4cc32e292b635506ac0c529a3b2b6270fb83ad4628698addd5e"} Jan 26 19:15:13 crc kubenswrapper[4787]: I0126 19:15:13.034443 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:13 crc kubenswrapper[4787]: I0126 19:15:13.034489 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d5ff996bd-vhgds" event={"ID":"737ad08a-5871-45a8-b16f-085d03fbaba4","Type":"ContainerStarted","Data":"392f4c48f7db66c6507bcd6add6444eb9f35ac6ac68866d01683add6de8e1e81"} Jan 26 19:15:13 crc kubenswrapper[4787]: I0126 19:15:13.061982 4787 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/placement-7d5ff996bd-vhgds" podStartSLOduration=2.061911561 podStartE2EDuration="2.061911561s" podCreationTimestamp="2026-01-26 19:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:15:13.054597284 +0000 UTC m=+5481.761733457" watchObservedRunningTime="2026-01-26 19:15:13.061911561 +0000 UTC m=+5481.769047734" Jan 26 19:15:14 crc kubenswrapper[4787]: I0126 19:15:14.042912 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:15 crc kubenswrapper[4787]: I0126 19:15:15.589469 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:15:15 crc kubenswrapper[4787]: E0126 19:15:15.590012 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.169204 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.240462 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576b69787-qtd6d"] Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.240681 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" podUID="2cf1851f-f362-457b-a929-bad659867628" containerName="dnsmasq-dns" containerID="cri-o://566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd" 
gracePeriod=10 Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.744411 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.782632 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-nb\") pod \"2cf1851f-f362-457b-a929-bad659867628\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.782672 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-dns-svc\") pod \"2cf1851f-f362-457b-a929-bad659867628\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.782710 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-sb\") pod \"2cf1851f-f362-457b-a929-bad659867628\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.782755 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-config\") pod \"2cf1851f-f362-457b-a929-bad659867628\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.782779 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc4pg\" (UniqueName: \"kubernetes.io/projected/2cf1851f-f362-457b-a929-bad659867628-kube-api-access-gc4pg\") pod \"2cf1851f-f362-457b-a929-bad659867628\" (UID: \"2cf1851f-f362-457b-a929-bad659867628\") " Jan 26 19:15:16 crc kubenswrapper[4787]: 
I0126 19:15:16.788456 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf1851f-f362-457b-a929-bad659867628-kube-api-access-gc4pg" (OuterVolumeSpecName: "kube-api-access-gc4pg") pod "2cf1851f-f362-457b-a929-bad659867628" (UID: "2cf1851f-f362-457b-a929-bad659867628"). InnerVolumeSpecName "kube-api-access-gc4pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.821860 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2cf1851f-f362-457b-a929-bad659867628" (UID: "2cf1851f-f362-457b-a929-bad659867628"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.825431 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-config" (OuterVolumeSpecName: "config") pod "2cf1851f-f362-457b-a929-bad659867628" (UID: "2cf1851f-f362-457b-a929-bad659867628"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.825535 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2cf1851f-f362-457b-a929-bad659867628" (UID: "2cf1851f-f362-457b-a929-bad659867628"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.830991 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2cf1851f-f362-457b-a929-bad659867628" (UID: "2cf1851f-f362-457b-a929-bad659867628"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.887054 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.887093 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc4pg\" (UniqueName: \"kubernetes.io/projected/2cf1851f-f362-457b-a929-bad659867628-kube-api-access-gc4pg\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.887166 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.887176 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:16 crc kubenswrapper[4787]: I0126 19:15:16.887185 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cf1851f-f362-457b-a929-bad659867628-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.066632 4787 generic.go:334] "Generic (PLEG): container finished" podID="2cf1851f-f362-457b-a929-bad659867628" 
containerID="566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd" exitCode=0 Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.066680 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.066713 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" event={"ID":"2cf1851f-f362-457b-a929-bad659867628","Type":"ContainerDied","Data":"566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd"} Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.066765 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576b69787-qtd6d" event={"ID":"2cf1851f-f362-457b-a929-bad659867628","Type":"ContainerDied","Data":"005a5cf85a891311b7429d2ee81df20ceee1c3e22dd1e240cbfae27605856902"} Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.066788 4787 scope.go:117] "RemoveContainer" containerID="566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd" Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.087630 4787 scope.go:117] "RemoveContainer" containerID="12b146d71eaf0d7dc7884ef762673a751af2f412e7cc5887914e068431f160da" Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.100318 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576b69787-qtd6d"] Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.109795 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5576b69787-qtd6d"] Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.112010 4787 scope.go:117] "RemoveContainer" containerID="566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd" Jan 26 19:15:17 crc kubenswrapper[4787]: E0126 19:15:17.112461 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd\": container with ID starting with 566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd not found: ID does not exist" containerID="566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd" Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.112518 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd"} err="failed to get container status \"566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd\": rpc error: code = NotFound desc = could not find container \"566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd\": container with ID starting with 566754a251c8416059e588010bdae639bb31ba10107f92690118bdce5d1a8edd not found: ID does not exist" Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.112546 4787 scope.go:117] "RemoveContainer" containerID="12b146d71eaf0d7dc7884ef762673a751af2f412e7cc5887914e068431f160da" Jan 26 19:15:17 crc kubenswrapper[4787]: E0126 19:15:17.113907 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b146d71eaf0d7dc7884ef762673a751af2f412e7cc5887914e068431f160da\": container with ID starting with 12b146d71eaf0d7dc7884ef762673a751af2f412e7cc5887914e068431f160da not found: ID does not exist" containerID="12b146d71eaf0d7dc7884ef762673a751af2f412e7cc5887914e068431f160da" Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.113965 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b146d71eaf0d7dc7884ef762673a751af2f412e7cc5887914e068431f160da"} err="failed to get container status \"12b146d71eaf0d7dc7884ef762673a751af2f412e7cc5887914e068431f160da\": rpc error: code = NotFound desc = could not find container \"12b146d71eaf0d7dc7884ef762673a751af2f412e7cc5887914e068431f160da\": container with ID 
starting with 12b146d71eaf0d7dc7884ef762673a751af2f412e7cc5887914e068431f160da not found: ID does not exist" Jan 26 19:15:17 crc kubenswrapper[4787]: I0126 19:15:17.598575 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf1851f-f362-457b-a929-bad659867628" path="/var/lib/kubelet/pods/2cf1851f-f362-457b-a929-bad659867628/volumes" Jan 26 19:15:28 crc kubenswrapper[4787]: I0126 19:15:28.589553 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:15:28 crc kubenswrapper[4787]: E0126 19:15:28.590314 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:15:41 crc kubenswrapper[4787]: I0126 19:15:41.594493 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:15:41 crc kubenswrapper[4787]: E0126 19:15:41.595375 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:15:42 crc kubenswrapper[4787]: I0126 19:15:42.617522 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:42 crc kubenswrapper[4787]: I0126 19:15:42.621006 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/placement-7d5ff996bd-vhgds" Jan 26 19:15:52 crc kubenswrapper[4787]: I0126 19:15:52.589679 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:15:52 crc kubenswrapper[4787]: E0126 19:15:52.590368 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:15:58 crc kubenswrapper[4787]: I0126 19:15:58.573340 4787 scope.go:117] "RemoveContainer" containerID="ea04f13cd8f3ae5b5ea4bc9f09b85351eaf43bdf8c2e5d77ab6ef07b72adff6d" Jan 26 19:15:58 crc kubenswrapper[4787]: I0126 19:15:58.600406 4787 scope.go:117] "RemoveContainer" containerID="9c42500fe69b59b61ac4f021cadac3e6f3a7b4458dc85571dd95b2e932070b5f" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.275703 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pphlr"] Jan 26 19:16:03 crc kubenswrapper[4787]: E0126 19:16:03.276803 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf1851f-f362-457b-a929-bad659867628" containerName="init" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.276819 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf1851f-f362-457b-a929-bad659867628" containerName="init" Jan 26 19:16:03 crc kubenswrapper[4787]: E0126 19:16:03.276884 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf1851f-f362-457b-a929-bad659867628" containerName="dnsmasq-dns" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.276894 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf1851f-f362-457b-a929-bad659867628" 
containerName="dnsmasq-dns" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.277174 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf1851f-f362-457b-a929-bad659867628" containerName="dnsmasq-dns" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.278741 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pphlr" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.288478 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pphlr"] Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.360249 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7hrtx"] Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.361461 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7hrtx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.376733 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7hrtx"] Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.392877 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv8rf\" (UniqueName: \"kubernetes.io/projected/81ed44f7-bff4-4567-8a8c-0a82410634e2-kube-api-access-cv8rf\") pod \"nova-api-db-create-pphlr\" (UID: \"81ed44f7-bff4-4567-8a8c-0a82410634e2\") " pod="openstack/nova-api-db-create-pphlr" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.393394 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ed44f7-bff4-4567-8a8c-0a82410634e2-operator-scripts\") pod \"nova-api-db-create-pphlr\" (UID: \"81ed44f7-bff4-4567-8a8c-0a82410634e2\") " pod="openstack/nova-api-db-create-pphlr" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.477626 4787 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-api-6f01-account-create-update-r82zn"] Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.478843 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6f01-account-create-update-r82zn" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.481541 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.485494 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-p64bx"] Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.486622 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p64bx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.495627 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mrgl\" (UniqueName: \"kubernetes.io/projected/1bf4c986-f22b-4694-a10c-36fb04249541-kube-api-access-4mrgl\") pod \"nova-cell0-db-create-7hrtx\" (UID: \"1bf4c986-f22b-4694-a10c-36fb04249541\") " pod="openstack/nova-cell0-db-create-7hrtx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.495690 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv8rf\" (UniqueName: \"kubernetes.io/projected/81ed44f7-bff4-4567-8a8c-0a82410634e2-kube-api-access-cv8rf\") pod \"nova-api-db-create-pphlr\" (UID: \"81ed44f7-bff4-4567-8a8c-0a82410634e2\") " pod="openstack/nova-api-db-create-pphlr" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.495771 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bf4c986-f22b-4694-a10c-36fb04249541-operator-scripts\") pod \"nova-cell0-db-create-7hrtx\" (UID: \"1bf4c986-f22b-4694-a10c-36fb04249541\") " pod="openstack/nova-cell0-db-create-7hrtx" Jan 
26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.495856 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ed44f7-bff4-4567-8a8c-0a82410634e2-operator-scripts\") pod \"nova-api-db-create-pphlr\" (UID: \"81ed44f7-bff4-4567-8a8c-0a82410634e2\") " pod="openstack/nova-api-db-create-pphlr" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.496188 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6f01-account-create-update-r82zn"] Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.496784 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ed44f7-bff4-4567-8a8c-0a82410634e2-operator-scripts\") pod \"nova-api-db-create-pphlr\" (UID: \"81ed44f7-bff4-4567-8a8c-0a82410634e2\") " pod="openstack/nova-api-db-create-pphlr" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.511051 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-p64bx"] Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.540210 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv8rf\" (UniqueName: \"kubernetes.io/projected/81ed44f7-bff4-4567-8a8c-0a82410634e2-kube-api-access-cv8rf\") pod \"nova-api-db-create-pphlr\" (UID: \"81ed44f7-bff4-4567-8a8c-0a82410634e2\") " pod="openstack/nova-api-db-create-pphlr" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.596643 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvdnj\" (UniqueName: \"kubernetes.io/projected/f04cb77e-3029-4aff-a4cb-105699eb3fbe-kube-api-access-lvdnj\") pod \"nova-api-6f01-account-create-update-r82zn\" (UID: \"f04cb77e-3029-4aff-a4cb-105699eb3fbe\") " pod="openstack/nova-api-6f01-account-create-update-r82zn" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.597023 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mrgl\" (UniqueName: \"kubernetes.io/projected/1bf4c986-f22b-4694-a10c-36fb04249541-kube-api-access-4mrgl\") pod \"nova-cell0-db-create-7hrtx\" (UID: \"1bf4c986-f22b-4694-a10c-36fb04249541\") " pod="openstack/nova-cell0-db-create-7hrtx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.597373 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-operator-scripts\") pod \"nova-cell1-db-create-p64bx\" (UID: \"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b\") " pod="openstack/nova-cell1-db-create-p64bx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.597445 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f04cb77e-3029-4aff-a4cb-105699eb3fbe-operator-scripts\") pod \"nova-api-6f01-account-create-update-r82zn\" (UID: \"f04cb77e-3029-4aff-a4cb-105699eb3fbe\") " pod="openstack/nova-api-6f01-account-create-update-r82zn" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.597490 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bf4c986-f22b-4694-a10c-36fb04249541-operator-scripts\") pod \"nova-cell0-db-create-7hrtx\" (UID: \"1bf4c986-f22b-4694-a10c-36fb04249541\") " pod="openstack/nova-cell0-db-create-7hrtx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.597614 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwt48\" (UniqueName: \"kubernetes.io/projected/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-kube-api-access-fwt48\") pod \"nova-cell1-db-create-p64bx\" (UID: \"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b\") " pod="openstack/nova-cell1-db-create-p64bx" Jan 26 19:16:03 
crc kubenswrapper[4787]: I0126 19:16:03.598447 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bf4c986-f22b-4694-a10c-36fb04249541-operator-scripts\") pod \"nova-cell0-db-create-7hrtx\" (UID: \"1bf4c986-f22b-4694-a10c-36fb04249541\") " pod="openstack/nova-cell0-db-create-7hrtx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.598878 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pphlr" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.615570 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mrgl\" (UniqueName: \"kubernetes.io/projected/1bf4c986-f22b-4694-a10c-36fb04249541-kube-api-access-4mrgl\") pod \"nova-cell0-db-create-7hrtx\" (UID: \"1bf4c986-f22b-4694-a10c-36fb04249541\") " pod="openstack/nova-cell0-db-create-7hrtx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.673491 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-f84a-account-create-update-l89g9"] Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.674843 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f84a-account-create-update-l89g9" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.677281 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.678663 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7hrtx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.689421 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f84a-account-create-update-l89g9"] Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.703008 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwt48\" (UniqueName: \"kubernetes.io/projected/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-kube-api-access-fwt48\") pod \"nova-cell1-db-create-p64bx\" (UID: \"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b\") " pod="openstack/nova-cell1-db-create-p64bx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.703290 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvdnj\" (UniqueName: \"kubernetes.io/projected/f04cb77e-3029-4aff-a4cb-105699eb3fbe-kube-api-access-lvdnj\") pod \"nova-api-6f01-account-create-update-r82zn\" (UID: \"f04cb77e-3029-4aff-a4cb-105699eb3fbe\") " pod="openstack/nova-api-6f01-account-create-update-r82zn" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.703489 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-operator-scripts\") pod \"nova-cell1-db-create-p64bx\" (UID: \"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b\") " pod="openstack/nova-cell1-db-create-p64bx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.703666 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f04cb77e-3029-4aff-a4cb-105699eb3fbe-operator-scripts\") pod \"nova-api-6f01-account-create-update-r82zn\" (UID: \"f04cb77e-3029-4aff-a4cb-105699eb3fbe\") " pod="openstack/nova-api-6f01-account-create-update-r82zn" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.704650 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f04cb77e-3029-4aff-a4cb-105699eb3fbe-operator-scripts\") pod \"nova-api-6f01-account-create-update-r82zn\" (UID: \"f04cb77e-3029-4aff-a4cb-105699eb3fbe\") " pod="openstack/nova-api-6f01-account-create-update-r82zn" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.705026 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-operator-scripts\") pod \"nova-cell1-db-create-p64bx\" (UID: \"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b\") " pod="openstack/nova-cell1-db-create-p64bx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.742191 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvdnj\" (UniqueName: \"kubernetes.io/projected/f04cb77e-3029-4aff-a4cb-105699eb3fbe-kube-api-access-lvdnj\") pod \"nova-api-6f01-account-create-update-r82zn\" (UID: \"f04cb77e-3029-4aff-a4cb-105699eb3fbe\") " pod="openstack/nova-api-6f01-account-create-update-r82zn" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.742588 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwt48\" (UniqueName: \"kubernetes.io/projected/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-kube-api-access-fwt48\") pod \"nova-cell1-db-create-p64bx\" (UID: \"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b\") " pod="openstack/nova-cell1-db-create-p64bx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.801584 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6f01-account-create-update-r82zn" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.805741 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad611fd-34d0-4806-8eab-01472e06fd17-operator-scripts\") pod \"nova-cell0-f84a-account-create-update-l89g9\" (UID: \"aad611fd-34d0-4806-8eab-01472e06fd17\") " pod="openstack/nova-cell0-f84a-account-create-update-l89g9" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.805790 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhjnf\" (UniqueName: \"kubernetes.io/projected/aad611fd-34d0-4806-8eab-01472e06fd17-kube-api-access-dhjnf\") pod \"nova-cell0-f84a-account-create-update-l89g9\" (UID: \"aad611fd-34d0-4806-8eab-01472e06fd17\") " pod="openstack/nova-cell0-f84a-account-create-update-l89g9" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.814982 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-p64bx" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.910328 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad611fd-34d0-4806-8eab-01472e06fd17-operator-scripts\") pod \"nova-cell0-f84a-account-create-update-l89g9\" (UID: \"aad611fd-34d0-4806-8eab-01472e06fd17\") " pod="openstack/nova-cell0-f84a-account-create-update-l89g9" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.914813 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhjnf\" (UniqueName: \"kubernetes.io/projected/aad611fd-34d0-4806-8eab-01472e06fd17-kube-api-access-dhjnf\") pod \"nova-cell0-f84a-account-create-update-l89g9\" (UID: \"aad611fd-34d0-4806-8eab-01472e06fd17\") " pod="openstack/nova-cell0-f84a-account-create-update-l89g9" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.912313 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad611fd-34d0-4806-8eab-01472e06fd17-operator-scripts\") pod \"nova-cell0-f84a-account-create-update-l89g9\" (UID: \"aad611fd-34d0-4806-8eab-01472e06fd17\") " pod="openstack/nova-cell0-f84a-account-create-update-l89g9" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.921728 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f61a-account-create-update-74jw7"] Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.923498 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f61a-account-create-update-74jw7" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.932409 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.943672 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhjnf\" (UniqueName: \"kubernetes.io/projected/aad611fd-34d0-4806-8eab-01472e06fd17-kube-api-access-dhjnf\") pod \"nova-cell0-f84a-account-create-update-l89g9\" (UID: \"aad611fd-34d0-4806-8eab-01472e06fd17\") " pod="openstack/nova-cell0-f84a-account-create-update-l89g9" Jan 26 19:16:03 crc kubenswrapper[4787]: I0126 19:16:03.961334 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f61a-account-create-update-74jw7"] Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.007305 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f84a-account-create-update-l89g9" Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.022462 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f90eeb4-4e9a-4568-8280-357f44085201-operator-scripts\") pod \"nova-cell1-f61a-account-create-update-74jw7\" (UID: \"0f90eeb4-4e9a-4568-8280-357f44085201\") " pod="openstack/nova-cell1-f61a-account-create-update-74jw7" Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.022511 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5cz\" (UniqueName: \"kubernetes.io/projected/0f90eeb4-4e9a-4568-8280-357f44085201-kube-api-access-bf5cz\") pod \"nova-cell1-f61a-account-create-update-74jw7\" (UID: \"0f90eeb4-4e9a-4568-8280-357f44085201\") " pod="openstack/nova-cell1-f61a-account-create-update-74jw7" Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 
19:16:04.125463 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f90eeb4-4e9a-4568-8280-357f44085201-operator-scripts\") pod \"nova-cell1-f61a-account-create-update-74jw7\" (UID: \"0f90eeb4-4e9a-4568-8280-357f44085201\") " pod="openstack/nova-cell1-f61a-account-create-update-74jw7" Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.125513 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5cz\" (UniqueName: \"kubernetes.io/projected/0f90eeb4-4e9a-4568-8280-357f44085201-kube-api-access-bf5cz\") pod \"nova-cell1-f61a-account-create-update-74jw7\" (UID: \"0f90eeb4-4e9a-4568-8280-357f44085201\") " pod="openstack/nova-cell1-f61a-account-create-update-74jw7" Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.126292 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f90eeb4-4e9a-4568-8280-357f44085201-operator-scripts\") pod \"nova-cell1-f61a-account-create-update-74jw7\" (UID: \"0f90eeb4-4e9a-4568-8280-357f44085201\") " pod="openstack/nova-cell1-f61a-account-create-update-74jw7" Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.148624 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5cz\" (UniqueName: \"kubernetes.io/projected/0f90eeb4-4e9a-4568-8280-357f44085201-kube-api-access-bf5cz\") pod \"nova-cell1-f61a-account-create-update-74jw7\" (UID: \"0f90eeb4-4e9a-4568-8280-357f44085201\") " pod="openstack/nova-cell1-f61a-account-create-update-74jw7" Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.187535 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pphlr"] Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.284483 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f61a-account-create-update-74jw7" Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.357345 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7hrtx"] Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.456794 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6f01-account-create-update-r82zn"] Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.460238 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pphlr" event={"ID":"81ed44f7-bff4-4567-8a8c-0a82410634e2","Type":"ContainerStarted","Data":"fd948265542fa97da094b3360367adc46582c13d938a220b156403d55aab4932"} Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.473445 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7hrtx" event={"ID":"1bf4c986-f22b-4694-a10c-36fb04249541","Type":"ContainerStarted","Data":"c38905216c119f2ae066f09773899ece3ec08597cc7d740acf9708d1c7be9f2d"} Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.475878 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-p64bx"] Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.712292 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f84a-account-create-update-l89g9"] Jan 26 19:16:04 crc kubenswrapper[4787]: I0126 19:16:04.860920 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f61a-account-create-update-74jw7"] Jan 26 19:16:04 crc kubenswrapper[4787]: W0126 19:16:04.862893 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f90eeb4_4e9a_4568_8280_357f44085201.slice/crio-4ba3675d2dc2f903d7345d28fe21770c34a049c183cb8c7bf9717c0a358e1085 WatchSource:0}: Error finding container 4ba3675d2dc2f903d7345d28fe21770c34a049c183cb8c7bf9717c0a358e1085: Status 404 
returned error can't find the container with id 4ba3675d2dc2f903d7345d28fe21770c34a049c183cb8c7bf9717c0a358e1085 Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.482988 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p64bx" event={"ID":"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b","Type":"ContainerStarted","Data":"e609af0091708393e9957d88a259a3b34c186c03a411a18b6d5e6095cd7740c9"} Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.483350 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p64bx" event={"ID":"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b","Type":"ContainerStarted","Data":"13ed4f8c87326bb52afa28520555695bfcb1265afb62721844d3a32bae660ad6"} Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.485484 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f61a-account-create-update-74jw7" event={"ID":"0f90eeb4-4e9a-4568-8280-357f44085201","Type":"ContainerStarted","Data":"2c01a2f88b0829782d2bb5bbc0b71d60ac0e6eff51bd64d6820b96f04f6110c6"} Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.485608 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f61a-account-create-update-74jw7" event={"ID":"0f90eeb4-4e9a-4568-8280-357f44085201","Type":"ContainerStarted","Data":"4ba3675d2dc2f903d7345d28fe21770c34a049c183cb8c7bf9717c0a358e1085"} Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.487754 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pphlr" event={"ID":"81ed44f7-bff4-4567-8a8c-0a82410634e2","Type":"ContainerStarted","Data":"41b301526222a9249c832896347d44b5428a597b4bdaadfb8a9cc0bc8f803910"} Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.489213 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f01-account-create-update-r82zn" 
event={"ID":"f04cb77e-3029-4aff-a4cb-105699eb3fbe","Type":"ContainerStarted","Data":"673db101059efdbcc8c49046a454d4c0c8313ec08e58b651f8b307c03bcd2208"} Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.489236 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f01-account-create-update-r82zn" event={"ID":"f04cb77e-3029-4aff-a4cb-105699eb3fbe","Type":"ContainerStarted","Data":"95505f437b343d082b8f16b399b4950bb5c841f1023a66414bde7c4ed2b735e6"} Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.490749 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f84a-account-create-update-l89g9" event={"ID":"aad611fd-34d0-4806-8eab-01472e06fd17","Type":"ContainerStarted","Data":"b422a913941acedeb5540c3e2b1df7f3b891591f7ede706236a07eb5b16421a3"} Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.490779 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f84a-account-create-update-l89g9" event={"ID":"aad611fd-34d0-4806-8eab-01472e06fd17","Type":"ContainerStarted","Data":"0cce84ffe8b52bdc973841dbfd487e472e0bfb8763af768bc60449b7493cba52"} Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.492274 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7hrtx" event={"ID":"1bf4c986-f22b-4694-a10c-36fb04249541","Type":"ContainerStarted","Data":"975160d504f09c444df8ae765688296320ddfe408a1eb16a514242c452a9d56a"} Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.522602 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-pphlr" podStartSLOduration=2.522583332 podStartE2EDuration="2.522583332s" podCreationTimestamp="2026-01-26 19:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:05.514378972 +0000 UTC m=+5534.221515105" watchObservedRunningTime="2026-01-26 19:16:05.522583332 +0000 UTC 
m=+5534.229719465" Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.522686 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-p64bx" podStartSLOduration=2.522681304 podStartE2EDuration="2.522681304s" podCreationTimestamp="2026-01-26 19:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:05.499706415 +0000 UTC m=+5534.206842548" watchObservedRunningTime="2026-01-26 19:16:05.522681304 +0000 UTC m=+5534.229817437" Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.529111 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-7hrtx" podStartSLOduration=2.529094799 podStartE2EDuration="2.529094799s" podCreationTimestamp="2026-01-26 19:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:05.527686285 +0000 UTC m=+5534.234822428" watchObservedRunningTime="2026-01-26 19:16:05.529094799 +0000 UTC m=+5534.236230942" Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.543829 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-6f01-account-create-update-r82zn" podStartSLOduration=2.543809107 podStartE2EDuration="2.543809107s" podCreationTimestamp="2026-01-26 19:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:05.540303472 +0000 UTC m=+5534.247439615" watchObservedRunningTime="2026-01-26 19:16:05.543809107 +0000 UTC m=+5534.250945240" Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.560745 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-f84a-account-create-update-l89g9" podStartSLOduration=2.560725338 
podStartE2EDuration="2.560725338s" podCreationTimestamp="2026-01-26 19:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:05.559347325 +0000 UTC m=+5534.266483468" watchObservedRunningTime="2026-01-26 19:16:05.560725338 +0000 UTC m=+5534.267861471" Jan 26 19:16:05 crc kubenswrapper[4787]: I0126 19:16:05.589625 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-f61a-account-create-update-74jw7" podStartSLOduration=2.5896047700000002 podStartE2EDuration="2.58960477s" podCreationTimestamp="2026-01-26 19:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:05.578407668 +0000 UTC m=+5534.285543811" watchObservedRunningTime="2026-01-26 19:16:05.58960477 +0000 UTC m=+5534.296740903" Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.503318 4787 generic.go:334] "Generic (PLEG): container finished" podID="9ed94e4d-5ab3-40f0-9cd2-d95ea215370b" containerID="e609af0091708393e9957d88a259a3b34c186c03a411a18b6d5e6095cd7740c9" exitCode=0 Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.503403 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p64bx" event={"ID":"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b","Type":"ContainerDied","Data":"e609af0091708393e9957d88a259a3b34c186c03a411a18b6d5e6095cd7740c9"} Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.505938 4787 generic.go:334] "Generic (PLEG): container finished" podID="0f90eeb4-4e9a-4568-8280-357f44085201" containerID="2c01a2f88b0829782d2bb5bbc0b71d60ac0e6eff51bd64d6820b96f04f6110c6" exitCode=0 Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.511481 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f61a-account-create-update-74jw7" 
event={"ID":"0f90eeb4-4e9a-4568-8280-357f44085201","Type":"ContainerDied","Data":"2c01a2f88b0829782d2bb5bbc0b71d60ac0e6eff51bd64d6820b96f04f6110c6"} Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.512372 4787 generic.go:334] "Generic (PLEG): container finished" podID="f04cb77e-3029-4aff-a4cb-105699eb3fbe" containerID="673db101059efdbcc8c49046a454d4c0c8313ec08e58b651f8b307c03bcd2208" exitCode=0 Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.512517 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f01-account-create-update-r82zn" event={"ID":"f04cb77e-3029-4aff-a4cb-105699eb3fbe","Type":"ContainerDied","Data":"673db101059efdbcc8c49046a454d4c0c8313ec08e58b651f8b307c03bcd2208"} Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.514401 4787 generic.go:334] "Generic (PLEG): container finished" podID="81ed44f7-bff4-4567-8a8c-0a82410634e2" containerID="41b301526222a9249c832896347d44b5428a597b4bdaadfb8a9cc0bc8f803910" exitCode=0 Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.514453 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pphlr" event={"ID":"81ed44f7-bff4-4567-8a8c-0a82410634e2","Type":"ContainerDied","Data":"41b301526222a9249c832896347d44b5428a597b4bdaadfb8a9cc0bc8f803910"} Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.515983 4787 generic.go:334] "Generic (PLEG): container finished" podID="aad611fd-34d0-4806-8eab-01472e06fd17" containerID="b422a913941acedeb5540c3e2b1df7f3b891591f7ede706236a07eb5b16421a3" exitCode=0 Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.516029 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f84a-account-create-update-l89g9" event={"ID":"aad611fd-34d0-4806-8eab-01472e06fd17","Type":"ContainerDied","Data":"b422a913941acedeb5540c3e2b1df7f3b891591f7ede706236a07eb5b16421a3"} Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.518854 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="1bf4c986-f22b-4694-a10c-36fb04249541" containerID="975160d504f09c444df8ae765688296320ddfe408a1eb16a514242c452a9d56a" exitCode=0 Jan 26 19:16:06 crc kubenswrapper[4787]: I0126 19:16:06.518887 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7hrtx" event={"ID":"1bf4c986-f22b-4694-a10c-36fb04249541","Type":"ContainerDied","Data":"975160d504f09c444df8ae765688296320ddfe408a1eb16a514242c452a9d56a"} Jan 26 19:16:07 crc kubenswrapper[4787]: I0126 19:16:07.589771 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:16:07 crc kubenswrapper[4787]: E0126 19:16:07.590048 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:16:07 crc kubenswrapper[4787]: I0126 19:16:07.912298 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-p64bx" Jan 26 19:16:07 crc kubenswrapper[4787]: I0126 19:16:07.995986 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-operator-scripts\") pod \"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b\" (UID: \"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b\") " Jan 26 19:16:07 crc kubenswrapper[4787]: I0126 19:16:07.996094 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwt48\" (UniqueName: \"kubernetes.io/projected/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-kube-api-access-fwt48\") pod \"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b\" (UID: \"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b\") " Jan 26 19:16:07 crc kubenswrapper[4787]: I0126 19:16:07.997888 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ed94e4d-5ab3-40f0-9cd2-d95ea215370b" (UID: "9ed94e4d-5ab3-40f0-9cd2-d95ea215370b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.007246 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-kube-api-access-fwt48" (OuterVolumeSpecName: "kube-api-access-fwt48") pod "9ed94e4d-5ab3-40f0-9cd2-d95ea215370b" (UID: "9ed94e4d-5ab3-40f0-9cd2-d95ea215370b"). InnerVolumeSpecName "kube-api-access-fwt48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.098815 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwt48\" (UniqueName: \"kubernetes.io/projected/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-kube-api-access-fwt48\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.098866 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.189594 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pphlr" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.228817 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7hrtx" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.238439 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f84a-account-create-update-l89g9" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.258898 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6f01-account-create-update-r82zn" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.274107 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f61a-account-create-update-74jw7" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.301371 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv8rf\" (UniqueName: \"kubernetes.io/projected/81ed44f7-bff4-4567-8a8c-0a82410634e2-kube-api-access-cv8rf\") pod \"81ed44f7-bff4-4567-8a8c-0a82410634e2\" (UID: \"81ed44f7-bff4-4567-8a8c-0a82410634e2\") " Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.301429 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mrgl\" (UniqueName: \"kubernetes.io/projected/1bf4c986-f22b-4694-a10c-36fb04249541-kube-api-access-4mrgl\") pod \"1bf4c986-f22b-4694-a10c-36fb04249541\" (UID: \"1bf4c986-f22b-4694-a10c-36fb04249541\") " Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.301462 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ed44f7-bff4-4567-8a8c-0a82410634e2-operator-scripts\") pod \"81ed44f7-bff4-4567-8a8c-0a82410634e2\" (UID: \"81ed44f7-bff4-4567-8a8c-0a82410634e2\") " Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.301581 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bf4c986-f22b-4694-a10c-36fb04249541-operator-scripts\") pod \"1bf4c986-f22b-4694-a10c-36fb04249541\" (UID: \"1bf4c986-f22b-4694-a10c-36fb04249541\") " Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.301652 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad611fd-34d0-4806-8eab-01472e06fd17-operator-scripts\") pod \"aad611fd-34d0-4806-8eab-01472e06fd17\" (UID: \"aad611fd-34d0-4806-8eab-01472e06fd17\") " Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.301720 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dhjnf\" (UniqueName: \"kubernetes.io/projected/aad611fd-34d0-4806-8eab-01472e06fd17-kube-api-access-dhjnf\") pod \"aad611fd-34d0-4806-8eab-01472e06fd17\" (UID: \"aad611fd-34d0-4806-8eab-01472e06fd17\") " Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.302187 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf4c986-f22b-4694-a10c-36fb04249541-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bf4c986-f22b-4694-a10c-36fb04249541" (UID: "1bf4c986-f22b-4694-a10c-36fb04249541"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.302289 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ed44f7-bff4-4567-8a8c-0a82410634e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81ed44f7-bff4-4567-8a8c-0a82410634e2" (UID: "81ed44f7-bff4-4567-8a8c-0a82410634e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.302483 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad611fd-34d0-4806-8eab-01472e06fd17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aad611fd-34d0-4806-8eab-01472e06fd17" (UID: "aad611fd-34d0-4806-8eab-01472e06fd17"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.302634 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ed44f7-bff4-4567-8a8c-0a82410634e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.302652 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bf4c986-f22b-4694-a10c-36fb04249541-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.302685 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad611fd-34d0-4806-8eab-01472e06fd17-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.306550 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf4c986-f22b-4694-a10c-36fb04249541-kube-api-access-4mrgl" (OuterVolumeSpecName: "kube-api-access-4mrgl") pod "1bf4c986-f22b-4694-a10c-36fb04249541" (UID: "1bf4c986-f22b-4694-a10c-36fb04249541"). InnerVolumeSpecName "kube-api-access-4mrgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.306934 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ed44f7-bff4-4567-8a8c-0a82410634e2-kube-api-access-cv8rf" (OuterVolumeSpecName: "kube-api-access-cv8rf") pod "81ed44f7-bff4-4567-8a8c-0a82410634e2" (UID: "81ed44f7-bff4-4567-8a8c-0a82410634e2"). InnerVolumeSpecName "kube-api-access-cv8rf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.309095 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad611fd-34d0-4806-8eab-01472e06fd17-kube-api-access-dhjnf" (OuterVolumeSpecName: "kube-api-access-dhjnf") pod "aad611fd-34d0-4806-8eab-01472e06fd17" (UID: "aad611fd-34d0-4806-8eab-01472e06fd17"). InnerVolumeSpecName "kube-api-access-dhjnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.404001 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f04cb77e-3029-4aff-a4cb-105699eb3fbe-operator-scripts\") pod \"f04cb77e-3029-4aff-a4cb-105699eb3fbe\" (UID: \"f04cb77e-3029-4aff-a4cb-105699eb3fbe\") " Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.404097 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf5cz\" (UniqueName: \"kubernetes.io/projected/0f90eeb4-4e9a-4568-8280-357f44085201-kube-api-access-bf5cz\") pod \"0f90eeb4-4e9a-4568-8280-357f44085201\" (UID: \"0f90eeb4-4e9a-4568-8280-357f44085201\") " Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.404119 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f90eeb4-4e9a-4568-8280-357f44085201-operator-scripts\") pod \"0f90eeb4-4e9a-4568-8280-357f44085201\" (UID: \"0f90eeb4-4e9a-4568-8280-357f44085201\") " Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.404193 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvdnj\" (UniqueName: \"kubernetes.io/projected/f04cb77e-3029-4aff-a4cb-105699eb3fbe-kube-api-access-lvdnj\") pod \"f04cb77e-3029-4aff-a4cb-105699eb3fbe\" (UID: \"f04cb77e-3029-4aff-a4cb-105699eb3fbe\") " Jan 26 19:16:08 crc 
kubenswrapper[4787]: I0126 19:16:08.404485 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f04cb77e-3029-4aff-a4cb-105699eb3fbe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f04cb77e-3029-4aff-a4cb-105699eb3fbe" (UID: "f04cb77e-3029-4aff-a4cb-105699eb3fbe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.404861 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f04cb77e-3029-4aff-a4cb-105699eb3fbe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.404885 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhjnf\" (UniqueName: \"kubernetes.io/projected/aad611fd-34d0-4806-8eab-01472e06fd17-kube-api-access-dhjnf\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.404896 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv8rf\" (UniqueName: \"kubernetes.io/projected/81ed44f7-bff4-4567-8a8c-0a82410634e2-kube-api-access-cv8rf\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.404905 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mrgl\" (UniqueName: \"kubernetes.io/projected/1bf4c986-f22b-4694-a10c-36fb04249541-kube-api-access-4mrgl\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.405319 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f90eeb4-4e9a-4568-8280-357f44085201-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f90eeb4-4e9a-4568-8280-357f44085201" (UID: "0f90eeb4-4e9a-4568-8280-357f44085201"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.406880 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f90eeb4-4e9a-4568-8280-357f44085201-kube-api-access-bf5cz" (OuterVolumeSpecName: "kube-api-access-bf5cz") pod "0f90eeb4-4e9a-4568-8280-357f44085201" (UID: "0f90eeb4-4e9a-4568-8280-357f44085201"). InnerVolumeSpecName "kube-api-access-bf5cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.407326 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04cb77e-3029-4aff-a4cb-105699eb3fbe-kube-api-access-lvdnj" (OuterVolumeSpecName: "kube-api-access-lvdnj") pod "f04cb77e-3029-4aff-a4cb-105699eb3fbe" (UID: "f04cb77e-3029-4aff-a4cb-105699eb3fbe"). InnerVolumeSpecName "kube-api-access-lvdnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.507085 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf5cz\" (UniqueName: \"kubernetes.io/projected/0f90eeb4-4e9a-4568-8280-357f44085201-kube-api-access-bf5cz\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.507127 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f90eeb4-4e9a-4568-8280-357f44085201-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.507140 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvdnj\" (UniqueName: \"kubernetes.io/projected/f04cb77e-3029-4aff-a4cb-105699eb3fbe-kube-api-access-lvdnj\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.557393 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f84a-account-create-update-l89g9" 
event={"ID":"aad611fd-34d0-4806-8eab-01472e06fd17","Type":"ContainerDied","Data":"0cce84ffe8b52bdc973841dbfd487e472e0bfb8763af768bc60449b7493cba52"} Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.557453 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cce84ffe8b52bdc973841dbfd487e472e0bfb8763af768bc60449b7493cba52" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.557785 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f84a-account-create-update-l89g9" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.559447 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7hrtx" event={"ID":"1bf4c986-f22b-4694-a10c-36fb04249541","Type":"ContainerDied","Data":"c38905216c119f2ae066f09773899ece3ec08597cc7d740acf9708d1c7be9f2d"} Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.559479 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c38905216c119f2ae066f09773899ece3ec08597cc7d740acf9708d1c7be9f2d" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.559485 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7hrtx" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.561061 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p64bx" event={"ID":"9ed94e4d-5ab3-40f0-9cd2-d95ea215370b","Type":"ContainerDied","Data":"13ed4f8c87326bb52afa28520555695bfcb1265afb62721844d3a32bae660ad6"} Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.561114 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13ed4f8c87326bb52afa28520555695bfcb1265afb62721844d3a32bae660ad6" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.561081 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-p64bx" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.564408 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f61a-account-create-update-74jw7" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.564634 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f61a-account-create-update-74jw7" event={"ID":"0f90eeb4-4e9a-4568-8280-357f44085201","Type":"ContainerDied","Data":"4ba3675d2dc2f903d7345d28fe21770c34a049c183cb8c7bf9717c0a358e1085"} Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.564673 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba3675d2dc2f903d7345d28fe21770c34a049c183cb8c7bf9717c0a358e1085" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.566688 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pphlr" event={"ID":"81ed44f7-bff4-4567-8a8c-0a82410634e2","Type":"ContainerDied","Data":"fd948265542fa97da094b3360367adc46582c13d938a220b156403d55aab4932"} Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.566711 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd948265542fa97da094b3360367adc46582c13d938a220b156403d55aab4932" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.566790 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pphlr" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.574754 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f01-account-create-update-r82zn" event={"ID":"f04cb77e-3029-4aff-a4cb-105699eb3fbe","Type":"ContainerDied","Data":"95505f437b343d082b8f16b399b4950bb5c841f1023a66414bde7c4ed2b735e6"} Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.574798 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95505f437b343d082b8f16b399b4950bb5c841f1023a66414bde7c4ed2b735e6" Jan 26 19:16:08 crc kubenswrapper[4787]: I0126 19:16:08.574867 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6f01-account-create-update-r82zn" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.920495 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rfhzc"] Jan 26 19:16:13 crc kubenswrapper[4787]: E0126 19:16:13.921348 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04cb77e-3029-4aff-a4cb-105699eb3fbe" containerName="mariadb-account-create-update" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.921363 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04cb77e-3029-4aff-a4cb-105699eb3fbe" containerName="mariadb-account-create-update" Jan 26 19:16:13 crc kubenswrapper[4787]: E0126 19:16:13.921390 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f90eeb4-4e9a-4568-8280-357f44085201" containerName="mariadb-account-create-update" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.921400 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f90eeb4-4e9a-4568-8280-357f44085201" containerName="mariadb-account-create-update" Jan 26 19:16:13 crc kubenswrapper[4787]: E0126 19:16:13.921425 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="81ed44f7-bff4-4567-8a8c-0a82410634e2" containerName="mariadb-database-create" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.921434 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ed44f7-bff4-4567-8a8c-0a82410634e2" containerName="mariadb-database-create" Jan 26 19:16:13 crc kubenswrapper[4787]: E0126 19:16:13.921446 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf4c986-f22b-4694-a10c-36fb04249541" containerName="mariadb-database-create" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.921454 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf4c986-f22b-4694-a10c-36fb04249541" containerName="mariadb-database-create" Jan 26 19:16:13 crc kubenswrapper[4787]: E0126 19:16:13.921468 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad611fd-34d0-4806-8eab-01472e06fd17" containerName="mariadb-account-create-update" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.921476 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad611fd-34d0-4806-8eab-01472e06fd17" containerName="mariadb-account-create-update" Jan 26 19:16:13 crc kubenswrapper[4787]: E0126 19:16:13.921484 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed94e4d-5ab3-40f0-9cd2-d95ea215370b" containerName="mariadb-database-create" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.921491 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed94e4d-5ab3-40f0-9cd2-d95ea215370b" containerName="mariadb-database-create" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.921674 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed94e4d-5ab3-40f0-9cd2-d95ea215370b" containerName="mariadb-database-create" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.921686 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf4c986-f22b-4694-a10c-36fb04249541" containerName="mariadb-database-create" Jan 26 19:16:13 crc kubenswrapper[4787]: 
I0126 19:16:13.921703 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ed44f7-bff4-4567-8a8c-0a82410634e2" containerName="mariadb-database-create" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.921718 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad611fd-34d0-4806-8eab-01472e06fd17" containerName="mariadb-account-create-update" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.921728 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04cb77e-3029-4aff-a4cb-105699eb3fbe" containerName="mariadb-account-create-update" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.921736 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f90eeb4-4e9a-4568-8280-357f44085201" containerName="mariadb-account-create-update" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.922362 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.925155 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.925379 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.926679 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nd2zf" Jan 26 19:16:13 crc kubenswrapper[4787]: I0126 19:16:13.930029 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rfhzc"] Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.003919 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-config-data\") pod 
\"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.004001 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.004261 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-scripts\") pod \"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.004338 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6khw\" (UniqueName: \"kubernetes.io/projected/4c23b6c5-9693-44f7-afcb-826d1c2370df-kube-api-access-r6khw\") pod \"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.106333 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6khw\" (UniqueName: \"kubernetes.io/projected/4c23b6c5-9693-44f7-afcb-826d1c2370df-kube-api-access-r6khw\") pod \"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.106463 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-config-data\") pod \"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.106513 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.107658 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-scripts\") pod \"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.113325 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-scripts\") pod \"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.113720 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.114606 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-config-data\") pod \"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.131124 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6khw\" (UniqueName: \"kubernetes.io/projected/4c23b6c5-9693-44f7-afcb-826d1c2370df-kube-api-access-r6khw\") pod \"nova-cell0-conductor-db-sync-rfhzc\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.241757 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:14 crc kubenswrapper[4787]: I0126 19:16:14.674629 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rfhzc"] Jan 26 19:16:14 crc kubenswrapper[4787]: W0126 19:16:14.693125 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c23b6c5_9693_44f7_afcb_826d1c2370df.slice/crio-864b9ee2a933bfde890e6fc16068b253ccf99f4a358d4b8322827416f8b1a553 WatchSource:0}: Error finding container 864b9ee2a933bfde890e6fc16068b253ccf99f4a358d4b8322827416f8b1a553: Status 404 returned error can't find the container with id 864b9ee2a933bfde890e6fc16068b253ccf99f4a358d4b8322827416f8b1a553 Jan 26 19:16:15 crc kubenswrapper[4787]: I0126 19:16:15.632681 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rfhzc" event={"ID":"4c23b6c5-9693-44f7-afcb-826d1c2370df","Type":"ContainerStarted","Data":"2e07edadc2af0d72bfa28d5e9aa03808f0c099f6e9efd32d89e899cb8709ad8d"} Jan 26 19:16:15 crc kubenswrapper[4787]: I0126 19:16:15.633014 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-rfhzc" event={"ID":"4c23b6c5-9693-44f7-afcb-826d1c2370df","Type":"ContainerStarted","Data":"864b9ee2a933bfde890e6fc16068b253ccf99f4a358d4b8322827416f8b1a553"} Jan 26 19:16:15 crc kubenswrapper[4787]: I0126 19:16:15.654342 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rfhzc" podStartSLOduration=2.654321786 podStartE2EDuration="2.654321786s" podCreationTimestamp="2026-01-26 19:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:15.650176075 +0000 UTC m=+5544.357312218" watchObservedRunningTime="2026-01-26 19:16:15.654321786 +0000 UTC m=+5544.361457919" Jan 26 19:16:20 crc kubenswrapper[4787]: I0126 19:16:20.669271 4787 generic.go:334] "Generic (PLEG): container finished" podID="4c23b6c5-9693-44f7-afcb-826d1c2370df" containerID="2e07edadc2af0d72bfa28d5e9aa03808f0c099f6e9efd32d89e899cb8709ad8d" exitCode=0 Jan 26 19:16:20 crc kubenswrapper[4787]: I0126 19:16:20.669358 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rfhzc" event={"ID":"4c23b6c5-9693-44f7-afcb-826d1c2370df","Type":"ContainerDied","Data":"2e07edadc2af0d72bfa28d5e9aa03808f0c099f6e9efd32d89e899cb8709ad8d"} Jan 26 19:16:21 crc kubenswrapper[4787]: I0126 19:16:21.600272 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:16:21 crc kubenswrapper[4787]: E0126 19:16:21.600840 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" 
podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.021344 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.149929 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-combined-ca-bundle\") pod \"4c23b6c5-9693-44f7-afcb-826d1c2370df\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.150515 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6khw\" (UniqueName: \"kubernetes.io/projected/4c23b6c5-9693-44f7-afcb-826d1c2370df-kube-api-access-r6khw\") pod \"4c23b6c5-9693-44f7-afcb-826d1c2370df\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.150642 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-scripts\") pod \"4c23b6c5-9693-44f7-afcb-826d1c2370df\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.150827 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-config-data\") pod \"4c23b6c5-9693-44f7-afcb-826d1c2370df\" (UID: \"4c23b6c5-9693-44f7-afcb-826d1c2370df\") " Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.155857 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c23b6c5-9693-44f7-afcb-826d1c2370df-kube-api-access-r6khw" (OuterVolumeSpecName: "kube-api-access-r6khw") pod "4c23b6c5-9693-44f7-afcb-826d1c2370df" (UID: 
"4c23b6c5-9693-44f7-afcb-826d1c2370df"). InnerVolumeSpecName "kube-api-access-r6khw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.169358 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-scripts" (OuterVolumeSpecName: "scripts") pod "4c23b6c5-9693-44f7-afcb-826d1c2370df" (UID: "4c23b6c5-9693-44f7-afcb-826d1c2370df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.175829 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c23b6c5-9693-44f7-afcb-826d1c2370df" (UID: "4c23b6c5-9693-44f7-afcb-826d1c2370df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.178273 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-config-data" (OuterVolumeSpecName: "config-data") pod "4c23b6c5-9693-44f7-afcb-826d1c2370df" (UID: "4c23b6c5-9693-44f7-afcb-826d1c2370df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.253330 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.253368 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.253384 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6khw\" (UniqueName: \"kubernetes.io/projected/4c23b6c5-9693-44f7-afcb-826d1c2370df-kube-api-access-r6khw\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.253396 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c23b6c5-9693-44f7-afcb-826d1c2370df-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.688865 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rfhzc" event={"ID":"4c23b6c5-9693-44f7-afcb-826d1c2370df","Type":"ContainerDied","Data":"864b9ee2a933bfde890e6fc16068b253ccf99f4a358d4b8322827416f8b1a553"} Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.688902 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="864b9ee2a933bfde890e6fc16068b253ccf99f4a358d4b8322827416f8b1a553" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.688987 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rfhzc" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.776213 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 19:16:22 crc kubenswrapper[4787]: E0126 19:16:22.776719 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c23b6c5-9693-44f7-afcb-826d1c2370df" containerName="nova-cell0-conductor-db-sync" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.776742 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c23b6c5-9693-44f7-afcb-826d1c2370df" containerName="nova-cell0-conductor-db-sync" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.777006 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c23b6c5-9693-44f7-afcb-826d1c2370df" containerName="nova-cell0-conductor-db-sync" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.777770 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.780115 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.787828 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nd2zf" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.792774 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.863134 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkkvg\" (UniqueName: \"kubernetes.io/projected/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-kube-api-access-qkkvg\") pod \"nova-cell0-conductor-0\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:22 crc 
kubenswrapper[4787]: I0126 19:16:22.863195 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.863596 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.965258 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.965343 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkkvg\" (UniqueName: \"kubernetes.io/projected/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-kube-api-access-qkkvg\") pod \"nova-cell0-conductor-0\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.965385 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.970192 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.970982 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:22 crc kubenswrapper[4787]: I0126 19:16:22.986292 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkkvg\" (UniqueName: \"kubernetes.io/projected/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-kube-api-access-qkkvg\") pod \"nova-cell0-conductor-0\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:23 crc kubenswrapper[4787]: I0126 19:16:23.096425 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:23 crc kubenswrapper[4787]: I0126 19:16:23.544089 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 19:16:23 crc kubenswrapper[4787]: I0126 19:16:23.710750 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd","Type":"ContainerStarted","Data":"e9697129d75e6a7f0243098146a754fea808030efe2f38f3a869fc6bb982910a"} Jan 26 19:16:25 crc kubenswrapper[4787]: I0126 19:16:25.726645 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd","Type":"ContainerStarted","Data":"c9c393dba8c1465ce0724b762adc35d624da435f9c0a1951c5b0a8290424cfe6"} Jan 26 19:16:26 crc kubenswrapper[4787]: I0126 19:16:26.737991 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:26 crc kubenswrapper[4787]: I0126 19:16:26.763238 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=4.763214332 podStartE2EDuration="4.763214332s" podCreationTimestamp="2026-01-26 19:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:26.756200821 +0000 UTC m=+5555.463336954" watchObservedRunningTime="2026-01-26 19:16:26.763214332 +0000 UTC m=+5555.470350465" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.122994 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.759960 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-dkn5r"] Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.770539 4787 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.776155 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.776398 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.789201 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dkn5r"] Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.856317 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-config-data\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.856775 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-scripts\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.856905 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddch5\" (UniqueName: \"kubernetes.io/projected/593f2f21-8ec9-4416-bc74-578add751f8f-kube-api-access-ddch5\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.857077 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.882538 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.884675 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.890077 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.908097 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.959700 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddch5\" (UniqueName: \"kubernetes.io/projected/593f2f21-8ec9-4416-bc74-578add751f8f-kube-api-access-ddch5\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.959783 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.959812 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-config-data\") pod \"nova-api-0\" (UID: 
\"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.959874 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-config-data\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.959924 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.961497 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e38b644-227d-4010-b437-b39c8ebfeae1-logs\") pod \"nova-api-0\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.961536 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-scripts\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.961568 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdvlb\" (UniqueName: \"kubernetes.io/projected/5e38b644-227d-4010-b437-b39c8ebfeae1-kube-api-access-pdvlb\") pod \"nova-api-0\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 
19:16:33.963938 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.967417 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.968927 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.969262 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-config-data\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.972430 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.985463 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:16:33 crc kubenswrapper[4787]: I0126 19:16:33.985938 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-scripts\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.011285 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddch5\" (UniqueName: 
\"kubernetes.io/projected/593f2f21-8ec9-4416-bc74-578add751f8f-kube-api-access-ddch5\") pod \"nova-cell0-cell-mapping-dkn5r\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.015187 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.016923 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.022096 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.049167 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.066360 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q67l\" (UniqueName: \"kubernetes.io/projected/3ffb3110-fda7-4a41-ad07-45fa242c6493-kube-api-access-6q67l\") pod \"nova-scheduler-0\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.066444 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-config-data\") pod \"nova-api-0\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.066492 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-config-data\") pod \"nova-scheduler-0\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:34 
crc kubenswrapper[4787]: I0126 19:16:34.066540 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.066565 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.066596 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.066621 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.066643 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsjhv\" (UniqueName: \"kubernetes.io/projected/20760e60-b469-4023-aacf-0fa14629a665-kube-api-access-dsjhv\") pod \"nova-cell1-novncproxy-0\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.066676 
4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e38b644-227d-4010-b437-b39c8ebfeae1-logs\") pod \"nova-api-0\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.066715 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdvlb\" (UniqueName: \"kubernetes.io/projected/5e38b644-227d-4010-b437-b39c8ebfeae1-kube-api-access-pdvlb\") pod \"nova-api-0\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.068213 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e38b644-227d-4010-b437-b39c8ebfeae1-logs\") pod \"nova-api-0\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.069625 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.072405 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.072810 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.073282 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-config-data\") pod \"nova-api-0\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.074889 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.101224 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdvlb\" (UniqueName: \"kubernetes.io/projected/5e38b644-227d-4010-b437-b39c8ebfeae1-kube-api-access-pdvlb\") pod \"nova-api-0\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " pod="openstack/nova-api-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.101694 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.131709 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.156902 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbb8b55cf-7dfbl"] Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.168139 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-config-data\") pod \"nova-scheduler-0\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.168197 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-config-data\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.168270 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.168306 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.168329 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tcff\" (UniqueName: 
\"kubernetes.io/projected/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-kube-api-access-2tcff\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.168358 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.168388 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.168413 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsjhv\" (UniqueName: \"kubernetes.io/projected/20760e60-b469-4023-aacf-0fa14629a665-kube-api-access-dsjhv\") pod \"nova-cell1-novncproxy-0\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.168491 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q67l\" (UniqueName: \"kubernetes.io/projected/3ffb3110-fda7-4a41-ad07-45fa242c6493-kube-api-access-6q67l\") pod \"nova-scheduler-0\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.169086 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.173583 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-logs\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.183241 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.183833 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-config-data\") pod \"nova-scheduler-0\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.187934 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.196655 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.202444 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dsjhv\" (UniqueName: \"kubernetes.io/projected/20760e60-b469-4023-aacf-0fa14629a665-kube-api-access-dsjhv\") pod \"nova-cell1-novncproxy-0\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.203008 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbb8b55cf-7dfbl"] Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.205667 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q67l\" (UniqueName: \"kubernetes.io/projected/3ffb3110-fda7-4a41-ad07-45fa242c6493-kube-api-access-6q67l\") pod \"nova-scheduler-0\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.206156 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.252470 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.274931 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-dns-svc\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.275224 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-sb\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.275291 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-logs\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.275360 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-config-data\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.275380 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-config\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 
19:16:34.275440 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2db2z\" (UniqueName: \"kubernetes.io/projected/bda1037c-3b61-4045-aa57-6cefda2462bb-kube-api-access-2db2z\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.275528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.275547 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tcff\" (UniqueName: \"kubernetes.io/projected/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-kube-api-access-2tcff\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.275703 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-nb\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.277430 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-logs\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.280701 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-config-data\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.291613 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.293846 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tcff\" (UniqueName: \"kubernetes.io/projected/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-kube-api-access-2tcff\") pod \"nova-metadata-0\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.378752 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-dns-svc\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.378825 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-sb\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.378907 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-config\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: 
\"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.378942 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2db2z\" (UniqueName: \"kubernetes.io/projected/bda1037c-3b61-4045-aa57-6cefda2462bb-kube-api-access-2db2z\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.379101 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-nb\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.382131 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-sb\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.383537 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-config\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.383987 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-dns-svc\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 
crc kubenswrapper[4787]: I0126 19:16:34.383994 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-nb\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.403263 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2db2z\" (UniqueName: \"kubernetes.io/projected/bda1037c-3b61-4045-aa57-6cefda2462bb-kube-api-access-2db2z\") pod \"dnsmasq-dns-bbb8b55cf-7dfbl\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.479064 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.574605 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.591960 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.708315 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dkn5r"] Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.732973 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d25hh"] Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.734633 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.737609 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.737979 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.763999 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d25hh"] Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.787064 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctvnx\" (UniqueName: \"kubernetes.io/projected/b10d393a-1d4b-428f-b2e8-aecb524c0f36-kube-api-access-ctvnx\") pod \"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.787163 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-scripts\") pod \"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.787191 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.787260 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-config-data\") pod \"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.843550 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dkn5r" event={"ID":"593f2f21-8ec9-4416-bc74-578add751f8f","Type":"ContainerStarted","Data":"99c00b23fdb2394921c6c79c255db1fd54d7bad72da8c08bbbcbb355ad4d984d"} Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.843713 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:16:34 crc kubenswrapper[4787]: W0126 19:16:34.860116 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e38b644_227d_4010_b437_b39c8ebfeae1.slice/crio-446464bae21888b98bc9a51dd303a39b5eb407893c6f6cb432c4efa582887f9d WatchSource:0}: Error finding container 446464bae21888b98bc9a51dd303a39b5eb407893c6f6cb432c4efa582887f9d: Status 404 returned error can't find the container with id 446464bae21888b98bc9a51dd303a39b5eb407893c6f6cb432c4efa582887f9d Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.891645 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-scripts\") pod \"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.891723 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.891843 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-config-data\") pod \"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.891961 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctvnx\" (UniqueName: \"kubernetes.io/projected/b10d393a-1d4b-428f-b2e8-aecb524c0f36-kube-api-access-ctvnx\") pod \"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.901173 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.908516 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-scripts\") pod \"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.911740 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.912568 4787 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-config-data\") pod \"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:34 crc kubenswrapper[4787]: W0126 19:16:34.913510 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ffb3110_fda7_4a41_ad07_45fa242c6493.slice/crio-bce0472afb1ddbf676c009f58ea9ae897ff5eff7ac8dabc9cf139c795188b535 WatchSource:0}: Error finding container bce0472afb1ddbf676c009f58ea9ae897ff5eff7ac8dabc9cf139c795188b535: Status 404 returned error can't find the container with id bce0472afb1ddbf676c009f58ea9ae897ff5eff7ac8dabc9cf139c795188b535 Jan 26 19:16:34 crc kubenswrapper[4787]: I0126 19:16:34.919460 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctvnx\" (UniqueName: \"kubernetes.io/projected/b10d393a-1d4b-428f-b2e8-aecb524c0f36-kube-api-access-ctvnx\") pod \"nova-cell1-conductor-db-sync-d25hh\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.052690 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:16:35 crc kubenswrapper[4787]: W0126 19:16:35.058138 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20760e60_b469_4023_aacf_0fa14629a665.slice/crio-a54ec87e4c604cee19f1790ea2a4d0ae04494bd3c8474540eb8312b46ab05049 WatchSource:0}: Error finding container a54ec87e4c604cee19f1790ea2a4d0ae04494bd3c8474540eb8312b46ab05049: Status 404 returned error can't find the container with id a54ec87e4c604cee19f1790ea2a4d0ae04494bd3c8474540eb8312b46ab05049 Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.072892 4787 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.219104 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.348305 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbb8b55cf-7dfbl"] Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.546997 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d25hh"] Jan 26 19:16:35 crc kubenswrapper[4787]: W0126 19:16:35.559233 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb10d393a_1d4b_428f_b2e8_aecb524c0f36.slice/crio-67154433293a461774c337df5f7deda12a20958e1075b99792aa3e4c585da8ac WatchSource:0}: Error finding container 67154433293a461774c337df5f7deda12a20958e1075b99792aa3e4c585da8ac: Status 404 returned error can't find the container with id 67154433293a461774c337df5f7deda12a20958e1075b99792aa3e4c585da8ac Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.590676 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:16:35 crc kubenswrapper[4787]: E0126 19:16:35.591069 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.856058 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"5e38b644-227d-4010-b437-b39c8ebfeae1","Type":"ContainerStarted","Data":"931c101229180a9004d21aa9778c545d45a5bf8436c9735f5e5a96a8d6929a96"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.856112 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e38b644-227d-4010-b437-b39c8ebfeae1","Type":"ContainerStarted","Data":"8fadf9e2cf40ffea82a5413383e91bd486eb22a568179f954580ca613b69e101"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.856127 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e38b644-227d-4010-b437-b39c8ebfeae1","Type":"ContainerStarted","Data":"446464bae21888b98bc9a51dd303a39b5eb407893c6f6cb432c4efa582887f9d"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.859029 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"20760e60-b469-4023-aacf-0fa14629a665","Type":"ContainerStarted","Data":"6bb42ebd049344913b8cd887408581aa24cf85099614f8e839f321047fa210e0"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.859070 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"20760e60-b469-4023-aacf-0fa14629a665","Type":"ContainerStarted","Data":"a54ec87e4c604cee19f1790ea2a4d0ae04494bd3c8474540eb8312b46ab05049"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.861469 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3ffb3110-fda7-4a41-ad07-45fa242c6493","Type":"ContainerStarted","Data":"19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.861502 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3ffb3110-fda7-4a41-ad07-45fa242c6493","Type":"ContainerStarted","Data":"bce0472afb1ddbf676c009f58ea9ae897ff5eff7ac8dabc9cf139c795188b535"} Jan 26 19:16:35 crc 
kubenswrapper[4787]: I0126 19:16:35.863350 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dkn5r" event={"ID":"593f2f21-8ec9-4416-bc74-578add751f8f","Type":"ContainerStarted","Data":"ed2d64c16444e50bc17143c948623d6da5ffc32229caec76cc077fe5e3be39ea"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.865633 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07","Type":"ContainerStarted","Data":"e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.865674 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07","Type":"ContainerStarted","Data":"1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.865689 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07","Type":"ContainerStarted","Data":"f723f1772206baa587eccdc916139ba05b00b01e8fd2b639af03f97b8c08c039"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.867974 4787 generic.go:334] "Generic (PLEG): container finished" podID="bda1037c-3b61-4045-aa57-6cefda2462bb" containerID="4175050a375c3b89f70bded51b149c1650d7e62347cfbee773212fb038f77736" exitCode=0 Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.868038 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" event={"ID":"bda1037c-3b61-4045-aa57-6cefda2462bb","Type":"ContainerDied","Data":"4175050a375c3b89f70bded51b149c1650d7e62347cfbee773212fb038f77736"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.868060 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" 
event={"ID":"bda1037c-3b61-4045-aa57-6cefda2462bb","Type":"ContainerStarted","Data":"32eacedf29af11a87198e9159528e5b2453996bfecf70727a7864c610720cc73"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.871421 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d25hh" event={"ID":"b10d393a-1d4b-428f-b2e8-aecb524c0f36","Type":"ContainerStarted","Data":"af3f82f5d26fec1d59457d2f442b5aa58197e4b962011cf7f0a18f879befa4c6"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.871493 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d25hh" event={"ID":"b10d393a-1d4b-428f-b2e8-aecb524c0f36","Type":"ContainerStarted","Data":"67154433293a461774c337df5f7deda12a20958e1075b99792aa3e4c585da8ac"} Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.881091 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8810723510000003 podStartE2EDuration="2.881072351s" podCreationTimestamp="2026-01-26 19:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:35.879840601 +0000 UTC m=+5564.586976734" watchObservedRunningTime="2026-01-26 19:16:35.881072351 +0000 UTC m=+5564.588208484" Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.947862 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.947840464 podStartE2EDuration="1.947840464s" podCreationTimestamp="2026-01-26 19:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:35.934686104 +0000 UTC m=+5564.641822257" watchObservedRunningTime="2026-01-26 19:16:35.947840464 +0000 UTC m=+5564.654976617" Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.967579 4787 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.9675575629999997 podStartE2EDuration="2.967557563s" podCreationTimestamp="2026-01-26 19:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:35.964869208 +0000 UTC m=+5564.672005341" watchObservedRunningTime="2026-01-26 19:16:35.967557563 +0000 UTC m=+5564.674693696" Jan 26 19:16:35 crc kubenswrapper[4787]: I0126 19:16:35.986635 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-d25hh" podStartSLOduration=1.9866190860000001 podStartE2EDuration="1.986619086s" podCreationTimestamp="2026-01-26 19:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:35.984426753 +0000 UTC m=+5564.691562886" watchObservedRunningTime="2026-01-26 19:16:35.986619086 +0000 UTC m=+5564.693755219" Jan 26 19:16:36 crc kubenswrapper[4787]: I0126 19:16:36.021106 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.021082754 podStartE2EDuration="3.021082754s" podCreationTimestamp="2026-01-26 19:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:36.005485125 +0000 UTC m=+5564.712621258" watchObservedRunningTime="2026-01-26 19:16:36.021082754 +0000 UTC m=+5564.728218887" Jan 26 19:16:36 crc kubenswrapper[4787]: I0126 19:16:36.036150 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-dkn5r" podStartSLOduration=3.03612792 podStartE2EDuration="3.03612792s" podCreationTimestamp="2026-01-26 19:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:36.032275206 +0000 UTC m=+5564.739411339" watchObservedRunningTime="2026-01-26 19:16:36.03612792 +0000 UTC m=+5564.743264063" Jan 26 19:16:36 crc kubenswrapper[4787]: I0126 19:16:36.883515 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" event={"ID":"bda1037c-3b61-4045-aa57-6cefda2462bb","Type":"ContainerStarted","Data":"ab4eefbd23c4e8d3b1101749ece18d520aedc9d3a07c0cb08cb15ef38468a5a7"} Jan 26 19:16:36 crc kubenswrapper[4787]: I0126 19:16:36.906063 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" podStartSLOduration=2.9060377539999998 podStartE2EDuration="2.906037754s" podCreationTimestamp="2026-01-26 19:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:36.899832843 +0000 UTC m=+5565.606968976" watchObservedRunningTime="2026-01-26 19:16:36.906037754 +0000 UTC m=+5565.613173887" Jan 26 19:16:37 crc kubenswrapper[4787]: I0126 19:16:37.890190 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:39 crc kubenswrapper[4787]: I0126 19:16:39.253435 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 19:16:39 crc kubenswrapper[4787]: I0126 19:16:39.479534 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:39 crc kubenswrapper[4787]: I0126 19:16:39.575995 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:16:39 crc kubenswrapper[4787]: I0126 19:16:39.576049 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:16:39 crc kubenswrapper[4787]: I0126 
19:16:39.914612 4787 generic.go:334] "Generic (PLEG): container finished" podID="b10d393a-1d4b-428f-b2e8-aecb524c0f36" containerID="af3f82f5d26fec1d59457d2f442b5aa58197e4b962011cf7f0a18f879befa4c6" exitCode=0 Jan 26 19:16:39 crc kubenswrapper[4787]: I0126 19:16:39.914681 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d25hh" event={"ID":"b10d393a-1d4b-428f-b2e8-aecb524c0f36","Type":"ContainerDied","Data":"af3f82f5d26fec1d59457d2f442b5aa58197e4b962011cf7f0a18f879befa4c6"} Jan 26 19:16:39 crc kubenswrapper[4787]: I0126 19:16:39.916133 4787 generic.go:334] "Generic (PLEG): container finished" podID="593f2f21-8ec9-4416-bc74-578add751f8f" containerID="ed2d64c16444e50bc17143c948623d6da5ffc32229caec76cc077fe5e3be39ea" exitCode=0 Jan 26 19:16:39 crc kubenswrapper[4787]: I0126 19:16:39.916161 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dkn5r" event={"ID":"593f2f21-8ec9-4416-bc74-578add751f8f","Type":"ContainerDied","Data":"ed2d64c16444e50bc17143c948623d6da5ffc32229caec76cc077fe5e3be39ea"} Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.381333 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.388208 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.426029 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-scripts\") pod \"593f2f21-8ec9-4416-bc74-578add751f8f\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.426117 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-scripts\") pod \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.426214 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctvnx\" (UniqueName: \"kubernetes.io/projected/b10d393a-1d4b-428f-b2e8-aecb524c0f36-kube-api-access-ctvnx\") pod \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.426249 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-config-data\") pod \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.426311 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddch5\" (UniqueName: \"kubernetes.io/projected/593f2f21-8ec9-4416-bc74-578add751f8f-kube-api-access-ddch5\") pod \"593f2f21-8ec9-4416-bc74-578add751f8f\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.426343 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-combined-ca-bundle\") pod \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\" (UID: \"b10d393a-1d4b-428f-b2e8-aecb524c0f36\") " Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.426373 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-combined-ca-bundle\") pod \"593f2f21-8ec9-4416-bc74-578add751f8f\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.426412 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-config-data\") pod \"593f2f21-8ec9-4416-bc74-578add751f8f\" (UID: \"593f2f21-8ec9-4416-bc74-578add751f8f\") " Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.433112 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-scripts" (OuterVolumeSpecName: "scripts") pod "593f2f21-8ec9-4416-bc74-578add751f8f" (UID: "593f2f21-8ec9-4416-bc74-578add751f8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.443183 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10d393a-1d4b-428f-b2e8-aecb524c0f36-kube-api-access-ctvnx" (OuterVolumeSpecName: "kube-api-access-ctvnx") pod "b10d393a-1d4b-428f-b2e8-aecb524c0f36" (UID: "b10d393a-1d4b-428f-b2e8-aecb524c0f36"). InnerVolumeSpecName "kube-api-access-ctvnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.445215 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593f2f21-8ec9-4416-bc74-578add751f8f-kube-api-access-ddch5" (OuterVolumeSpecName: "kube-api-access-ddch5") pod "593f2f21-8ec9-4416-bc74-578add751f8f" (UID: "593f2f21-8ec9-4416-bc74-578add751f8f"). InnerVolumeSpecName "kube-api-access-ddch5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.454863 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-scripts" (OuterVolumeSpecName: "scripts") pod "b10d393a-1d4b-428f-b2e8-aecb524c0f36" (UID: "b10d393a-1d4b-428f-b2e8-aecb524c0f36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.464978 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-config-data" (OuterVolumeSpecName: "config-data") pod "593f2f21-8ec9-4416-bc74-578add751f8f" (UID: "593f2f21-8ec9-4416-bc74-578add751f8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.468582 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b10d393a-1d4b-428f-b2e8-aecb524c0f36" (UID: "b10d393a-1d4b-428f-b2e8-aecb524c0f36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.468995 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "593f2f21-8ec9-4416-bc74-578add751f8f" (UID: "593f2f21-8ec9-4416-bc74-578add751f8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.469876 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-config-data" (OuterVolumeSpecName: "config-data") pod "b10d393a-1d4b-428f-b2e8-aecb524c0f36" (UID: "b10d393a-1d4b-428f-b2e8-aecb524c0f36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.529049 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.529086 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.529097 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctvnx\" (UniqueName: \"kubernetes.io/projected/b10d393a-1d4b-428f-b2e8-aecb524c0f36-kube-api-access-ctvnx\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.529110 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:41 crc 
kubenswrapper[4787]: I0126 19:16:41.529120 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddch5\" (UniqueName: \"kubernetes.io/projected/593f2f21-8ec9-4416-bc74-578add751f8f-kube-api-access-ddch5\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.529128 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10d393a-1d4b-428f-b2e8-aecb524c0f36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.529137 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.529147 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593f2f21-8ec9-4416-bc74-578add751f8f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.934786 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dkn5r" event={"ID":"593f2f21-8ec9-4416-bc74-578add751f8f","Type":"ContainerDied","Data":"99c00b23fdb2394921c6c79c255db1fd54d7bad72da8c08bbbcbb355ad4d984d"} Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.934829 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99c00b23fdb2394921c6c79c255db1fd54d7bad72da8c08bbbcbb355ad4d984d" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.934849 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dkn5r" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.937035 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d25hh" event={"ID":"b10d393a-1d4b-428f-b2e8-aecb524c0f36","Type":"ContainerDied","Data":"67154433293a461774c337df5f7deda12a20958e1075b99792aa3e4c585da8ac"} Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.937087 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67154433293a461774c337df5f7deda12a20958e1075b99792aa3e4c585da8ac" Jan 26 19:16:41 crc kubenswrapper[4787]: I0126 19:16:41.937134 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d25hh" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.009734 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:16:42 crc kubenswrapper[4787]: E0126 19:16:42.010139 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593f2f21-8ec9-4416-bc74-578add751f8f" containerName="nova-manage" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.010157 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="593f2f21-8ec9-4416-bc74-578add751f8f" containerName="nova-manage" Jan 26 19:16:42 crc kubenswrapper[4787]: E0126 19:16:42.010168 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10d393a-1d4b-428f-b2e8-aecb524c0f36" containerName="nova-cell1-conductor-db-sync" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.010175 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10d393a-1d4b-428f-b2e8-aecb524c0f36" containerName="nova-cell1-conductor-db-sync" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.010318 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="593f2f21-8ec9-4416-bc74-578add751f8f" containerName="nova-manage" Jan 26 19:16:42 crc 
kubenswrapper[4787]: I0126 19:16:42.010338 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10d393a-1d4b-428f-b2e8-aecb524c0f36" containerName="nova-cell1-conductor-db-sync" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.010928 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.013663 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.025529 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.037271 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2r7w\" (UniqueName: \"kubernetes.io/projected/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-kube-api-access-c2r7w\") pod \"nova-cell1-conductor-0\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.037403 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.037505 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.139215 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.139323 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.139390 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2r7w\" (UniqueName: \"kubernetes.io/projected/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-kube-api-access-c2r7w\") pod \"nova-cell1-conductor-0\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.144833 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.145572 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.160360 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2r7w\" (UniqueName: 
\"kubernetes.io/projected/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-kube-api-access-c2r7w\") pod \"nova-cell1-conductor-0\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.169268 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.177794 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5e38b644-227d-4010-b437-b39c8ebfeae1" containerName="nova-api-log" containerID="cri-o://8fadf9e2cf40ffea82a5413383e91bd486eb22a568179f954580ca613b69e101" gracePeriod=30 Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.178122 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5e38b644-227d-4010-b437-b39c8ebfeae1" containerName="nova-api-api" containerID="cri-o://931c101229180a9004d21aa9778c545d45a5bf8436c9735f5e5a96a8d6929a96" gracePeriod=30 Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.220453 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.220657 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3ffb3110-fda7-4a41-ad07-45fa242c6493" containerName="nova-scheduler-scheduler" containerID="cri-o://19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566" gracePeriod=30 Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.310317 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.310743 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" containerName="nova-metadata-metadata" 
containerID="cri-o://e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a" gracePeriod=30 Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.310993 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" containerName="nova-metadata-log" containerID="cri-o://1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb" gracePeriod=30 Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.330670 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.818202 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.862011 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.952298 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-config-data\") pod \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.952374 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-combined-ca-bundle\") pod \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.952432 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tcff\" (UniqueName: \"kubernetes.io/projected/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-kube-api-access-2tcff\") pod \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\" (UID: 
\"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.952542 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-logs\") pod \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\" (UID: \"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07\") " Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.953034 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-logs" (OuterVolumeSpecName: "logs") pod "bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" (UID: "bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.953445 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.954092 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e43bc4e3-4624-407d-9d1f-7092e9bd70fe","Type":"ContainerStarted","Data":"e87b695bd5ddecee04c016f22c72e26d87790577934c03efbd13a146102d06fe"} Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.955812 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-kube-api-access-2tcff" (OuterVolumeSpecName: "kube-api-access-2tcff") pod "bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" (UID: "bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07"). InnerVolumeSpecName "kube-api-access-2tcff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.971505 4787 generic.go:334] "Generic (PLEG): container finished" podID="bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" containerID="e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a" exitCode=0 Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.971552 4787 generic.go:334] "Generic (PLEG): container finished" podID="bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" containerID="1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb" exitCode=143 Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.971640 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07","Type":"ContainerDied","Data":"e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a"} Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.971673 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07","Type":"ContainerDied","Data":"1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb"} Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.971686 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07","Type":"ContainerDied","Data":"f723f1772206baa587eccdc916139ba05b00b01e8fd2b639af03f97b8c08c039"} Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.971704 4787 scope.go:117] "RemoveContainer" containerID="e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.971882 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.976834 4787 generic.go:334] "Generic (PLEG): container finished" podID="5e38b644-227d-4010-b437-b39c8ebfeae1" containerID="931c101229180a9004d21aa9778c545d45a5bf8436c9735f5e5a96a8d6929a96" exitCode=0 Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.976881 4787 generic.go:334] "Generic (PLEG): container finished" podID="5e38b644-227d-4010-b437-b39c8ebfeae1" containerID="8fadf9e2cf40ffea82a5413383e91bd486eb22a568179f954580ca613b69e101" exitCode=143 Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.976906 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e38b644-227d-4010-b437-b39c8ebfeae1","Type":"ContainerDied","Data":"931c101229180a9004d21aa9778c545d45a5bf8436c9735f5e5a96a8d6929a96"} Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.976932 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e38b644-227d-4010-b437-b39c8ebfeae1","Type":"ContainerDied","Data":"8fadf9e2cf40ffea82a5413383e91bd486eb22a568179f954580ca613b69e101"} Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.985872 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-config-data" (OuterVolumeSpecName: "config-data") pod "bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" (UID: "bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:42 crc kubenswrapper[4787]: I0126 19:16:42.994128 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" (UID: "bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.007696 4787 scope.go:117] "RemoveContainer" containerID="1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.034486 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.036277 4787 scope.go:117] "RemoveContainer" containerID="e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a" Jan 26 19:16:43 crc kubenswrapper[4787]: E0126 19:16:43.036672 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a\": container with ID starting with e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a not found: ID does not exist" containerID="e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.036704 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a"} err="failed to get container status \"e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a\": rpc error: code = NotFound desc = could not find container \"e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a\": container with ID starting with e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a not found: ID does not exist" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.036733 4787 scope.go:117] "RemoveContainer" containerID="1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb" Jan 26 19:16:43 crc kubenswrapper[4787]: E0126 19:16:43.039012 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb\": container with ID starting with 1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb not found: ID does not exist" containerID="1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.039071 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb"} err="failed to get container status \"1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb\": rpc error: code = NotFound desc = could not find container \"1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb\": container with ID starting with 1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb not found: ID does not exist" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.039111 4787 scope.go:117] "RemoveContainer" containerID="e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.039435 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a"} err="failed to get container status \"e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a\": rpc error: code = NotFound desc = could not find container \"e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a\": container with ID starting with e36d6968e5ff34d523c556e25c7073dbf1acd7e2092cf04b42f5fbbc63a8377a not found: ID does not exist" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.039465 4787 scope.go:117] "RemoveContainer" containerID="1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.040023 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb"} err="failed to get container status \"1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb\": rpc error: code = NotFound desc = could not find container \"1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb\": container with ID starting with 1ce0f0ed54def51e9cc8f6bd94eb0cf64a8bae55effc701ceac8c36ee28a48cb not found: ID does not exist" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.057587 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdvlb\" (UniqueName: \"kubernetes.io/projected/5e38b644-227d-4010-b437-b39c8ebfeae1-kube-api-access-pdvlb\") pod \"5e38b644-227d-4010-b437-b39c8ebfeae1\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.057729 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e38b644-227d-4010-b437-b39c8ebfeae1-logs\") pod \"5e38b644-227d-4010-b437-b39c8ebfeae1\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.057804 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-config-data\") pod \"5e38b644-227d-4010-b437-b39c8ebfeae1\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.057901 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-combined-ca-bundle\") pod \"5e38b644-227d-4010-b437-b39c8ebfeae1\" (UID: \"5e38b644-227d-4010-b437-b39c8ebfeae1\") " Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.058304 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/5e38b644-227d-4010-b437-b39c8ebfeae1-logs" (OuterVolumeSpecName: "logs") pod "5e38b644-227d-4010-b437-b39c8ebfeae1" (UID: "5e38b644-227d-4010-b437-b39c8ebfeae1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.058371 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.058388 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.058404 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tcff\" (UniqueName: \"kubernetes.io/projected/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07-kube-api-access-2tcff\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.062592 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e38b644-227d-4010-b437-b39c8ebfeae1-kube-api-access-pdvlb" (OuterVolumeSpecName: "kube-api-access-pdvlb") pod "5e38b644-227d-4010-b437-b39c8ebfeae1" (UID: "5e38b644-227d-4010-b437-b39c8ebfeae1"). InnerVolumeSpecName "kube-api-access-pdvlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.088311 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-config-data" (OuterVolumeSpecName: "config-data") pod "5e38b644-227d-4010-b437-b39c8ebfeae1" (UID: "5e38b644-227d-4010-b437-b39c8ebfeae1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.101229 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e38b644-227d-4010-b437-b39c8ebfeae1" (UID: "5e38b644-227d-4010-b437-b39c8ebfeae1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.160572 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.160676 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e38b644-227d-4010-b437-b39c8ebfeae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.160693 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdvlb\" (UniqueName: \"kubernetes.io/projected/5e38b644-227d-4010-b437-b39c8ebfeae1-kube-api-access-pdvlb\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.160709 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e38b644-227d-4010-b437-b39c8ebfeae1-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.370317 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.382554 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.395129 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-metadata-0"] Jan 26 19:16:43 crc kubenswrapper[4787]: E0126 19:16:43.395658 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" containerName="nova-metadata-log" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.395683 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" containerName="nova-metadata-log" Jan 26 19:16:43 crc kubenswrapper[4787]: E0126 19:16:43.395707 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e38b644-227d-4010-b437-b39c8ebfeae1" containerName="nova-api-api" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.395714 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e38b644-227d-4010-b437-b39c8ebfeae1" containerName="nova-api-api" Jan 26 19:16:43 crc kubenswrapper[4787]: E0126 19:16:43.395726 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e38b644-227d-4010-b437-b39c8ebfeae1" containerName="nova-api-log" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.395735 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e38b644-227d-4010-b437-b39c8ebfeae1" containerName="nova-api-log" Jan 26 19:16:43 crc kubenswrapper[4787]: E0126 19:16:43.395753 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" containerName="nova-metadata-metadata" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.395759 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" containerName="nova-metadata-metadata" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.395999 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e38b644-227d-4010-b437-b39c8ebfeae1" containerName="nova-api-api" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.396026 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" 
containerName="nova-metadata-metadata" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.396053 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e38b644-227d-4010-b437-b39c8ebfeae1" containerName="nova-api-log" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.396072 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" containerName="nova-metadata-log" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.397376 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.400330 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.424064 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.473214 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.473311 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpq52\" (UniqueName: \"kubernetes.io/projected/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-kube-api-access-dpq52\") pod \"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.473460 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-config-data\") pod 
\"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.473761 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-logs\") pod \"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.576333 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.576397 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpq52\" (UniqueName: \"kubernetes.io/projected/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-kube-api-access-dpq52\") pod \"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.576480 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-config-data\") pod \"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.576542 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-logs\") pod \"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.577441 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-logs\") pod \"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.582581 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.594274 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-config-data\") pod \"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.600159 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpq52\" (UniqueName: \"kubernetes.io/projected/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-kube-api-access-dpq52\") pod \"nova-metadata-0\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") " pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.610411 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07" path="/var/lib/kubelet/pods/bb8d19c0-612a-4eb8-bc19-6b94a6a5bf07/volumes" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.789437 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.813516 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.983574 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q67l\" (UniqueName: \"kubernetes.io/projected/3ffb3110-fda7-4a41-ad07-45fa242c6493-kube-api-access-6q67l\") pod \"3ffb3110-fda7-4a41-ad07-45fa242c6493\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.983622 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-combined-ca-bundle\") pod \"3ffb3110-fda7-4a41-ad07-45fa242c6493\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.983667 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-config-data\") pod \"3ffb3110-fda7-4a41-ad07-45fa242c6493\" (UID: \"3ffb3110-fda7-4a41-ad07-45fa242c6493\") " Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.990765 4787 generic.go:334] "Generic (PLEG): container finished" podID="3ffb3110-fda7-4a41-ad07-45fa242c6493" containerID="19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566" exitCode=0 Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.990879 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3ffb3110-fda7-4a41-ad07-45fa242c6493","Type":"ContainerDied","Data":"19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566"} Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.990919 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3ffb3110-fda7-4a41-ad07-45fa242c6493","Type":"ContainerDied","Data":"bce0472afb1ddbf676c009f58ea9ae897ff5eff7ac8dabc9cf139c795188b535"} Jan 26 
19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.990937 4787 scope.go:117] "RemoveContainer" containerID="19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.991099 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:16:43 crc kubenswrapper[4787]: I0126 19:16:43.995369 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e43bc4e3-4624-407d-9d1f-7092e9bd70fe","Type":"ContainerStarted","Data":"d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755"} Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:43.997023 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.000064 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e38b644-227d-4010-b437-b39c8ebfeae1","Type":"ContainerDied","Data":"446464bae21888b98bc9a51dd303a39b5eb407893c6f6cb432c4efa582887f9d"} Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.000181 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.001309 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ffb3110-fda7-4a41-ad07-45fa242c6493-kube-api-access-6q67l" (OuterVolumeSpecName: "kube-api-access-6q67l") pod "3ffb3110-fda7-4a41-ad07-45fa242c6493" (UID: "3ffb3110-fda7-4a41-ad07-45fa242c6493"). InnerVolumeSpecName "kube-api-access-6q67l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.013331 4787 scope.go:117] "RemoveContainer" containerID="19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566" Jan 26 19:16:44 crc kubenswrapper[4787]: E0126 19:16:44.015328 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566\": container with ID starting with 19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566 not found: ID does not exist" containerID="19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.015372 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566"} err="failed to get container status \"19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566\": rpc error: code = NotFound desc = could not find container \"19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566\": container with ID starting with 19f625bd3713b8983b310ddef323934585a21df584c2f4e453bb9d41dd308566 not found: ID does not exist" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.015413 4787 scope.go:117] "RemoveContainer" containerID="931c101229180a9004d21aa9778c545d45a5bf8436c9735f5e5a96a8d6929a96" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.049181 4787 scope.go:117] "RemoveContainer" containerID="8fadf9e2cf40ffea82a5413383e91bd486eb22a568179f954580ca613b69e101" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.053327 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-config-data" (OuterVolumeSpecName: "config-data") pod "3ffb3110-fda7-4a41-ad07-45fa242c6493" (UID: "3ffb3110-fda7-4a41-ad07-45fa242c6493"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.059882 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.059856145 podStartE2EDuration="3.059856145s" podCreationTimestamp="2026-01-26 19:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:44.013641622 +0000 UTC m=+5572.720777755" watchObservedRunningTime="2026-01-26 19:16:44.059856145 +0000 UTC m=+5572.766992278" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.074844 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ffb3110-fda7-4a41-ad07-45fa242c6493" (UID: "3ffb3110-fda7-4a41-ad07-45fa242c6493"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.076026 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.093451 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q67l\" (UniqueName: \"kubernetes.io/projected/3ffb3110-fda7-4a41-ad07-45fa242c6493-kube-api-access-6q67l\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.093484 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.093493 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffb3110-fda7-4a41-ad07-45fa242c6493-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.108424 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.115014 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 19:16:44 crc kubenswrapper[4787]: E0126 19:16:44.115600 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffb3110-fda7-4a41-ad07-45fa242c6493" containerName="nova-scheduler-scheduler" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.115631 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffb3110-fda7-4a41-ad07-45fa242c6493" containerName="nova-scheduler-scheduler" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.115912 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffb3110-fda7-4a41-ad07-45fa242c6493" containerName="nova-scheduler-scheduler" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.117079 4787 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.118928 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.127773 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.296998 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.297236 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e3700d-ee4d-44ad-8e26-c14acd5fc167-logs\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.297305 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-config-data\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.297512 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87dq4\" (UniqueName: \"kubernetes.io/projected/59e3700d-ee4d-44ad-8e26-c14acd5fc167-kube-api-access-87dq4\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.299086 4787 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.399663 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e3700d-ee4d-44ad-8e26-c14acd5fc167-logs\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.399738 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-config-data\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.399804 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87dq4\" (UniqueName: \"kubernetes.io/projected/59e3700d-ee4d-44ad-8e26-c14acd5fc167-kube-api-access-87dq4\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.399904 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.400724 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e3700d-ee4d-44ad-8e26-c14acd5fc167-logs\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.404760 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.405320 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-config-data\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.421860 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87dq4\" (UniqueName: \"kubernetes.io/projected/59e3700d-ee4d-44ad-8e26-c14acd5fc167-kube-api-access-87dq4\") pod \"nova-api-0\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") " pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.439643 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.480343 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.495228 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.537014 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.560521 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.576045 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.577635 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.579613 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.581316 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.594113 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.665291 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7df7578dcc-hqbgd"] Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.665528 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" podUID="5042020b-f4a1-4f94-b15c-d20917ffa7d4" containerName="dnsmasq-dns" containerID="cri-o://d7b0151356e040fbe6a6f942a30b1a60c2013ccce193da044311ce28fd14f27e" gracePeriod=10 Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.705151 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qdxl\" (UniqueName: \"kubernetes.io/projected/c9f3f185-e54c-4700-8f16-3b2a0b153a28-kube-api-access-4qdxl\") pod \"nova-scheduler-0\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.705410 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.705590 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-config-data\") pod \"nova-scheduler-0\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.807448 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.807511 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-config-data\") pod \"nova-scheduler-0\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.807620 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qdxl\" (UniqueName: \"kubernetes.io/projected/c9f3f185-e54c-4700-8f16-3b2a0b153a28-kube-api-access-4qdxl\") pod \"nova-scheduler-0\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.812026 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.812662 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-config-data\") pod 
\"nova-scheduler-0\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.827542 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qdxl\" (UniqueName: \"kubernetes.io/projected/c9f3f185-e54c-4700-8f16-3b2a0b153a28-kube-api-access-4qdxl\") pod \"nova-scheduler-0\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") " pod="openstack/nova-scheduler-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.910572 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:16:44 crc kubenswrapper[4787]: I0126 19:16:44.924080 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:16:44 crc kubenswrapper[4787]: W0126 19:16:44.927229 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59e3700d_ee4d_44ad_8e26_c14acd5fc167.slice/crio-f41b2d358da4377bd279adf2951862b4c38e40345dcf6d766c5e5ddbca914d0b WatchSource:0}: Error finding container f41b2d358da4377bd279adf2951862b4c38e40345dcf6d766c5e5ddbca914d0b: Status 404 returned error can't find the container with id f41b2d358da4377bd279adf2951862b4c38e40345dcf6d766c5e5ddbca914d0b Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.029349 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa","Type":"ContainerStarted","Data":"4903e12236bfee76dcd750d8aa471c08901b0d2551442e824bfb6c3f871f314c"} Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.029677 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa","Type":"ContainerStarted","Data":"054666a1016352dafea0cff7a9c9819e03ca27f9f0ae9f1b2580f61a4d692fdf"} Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 
19:16:45.034669 4787 generic.go:334] "Generic (PLEG): container finished" podID="5042020b-f4a1-4f94-b15c-d20917ffa7d4" containerID="d7b0151356e040fbe6a6f942a30b1a60c2013ccce193da044311ce28fd14f27e" exitCode=0 Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.034730 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" event={"ID":"5042020b-f4a1-4f94-b15c-d20917ffa7d4","Type":"ContainerDied","Data":"d7b0151356e040fbe6a6f942a30b1a60c2013ccce193da044311ce28fd14f27e"} Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.038457 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e3700d-ee4d-44ad-8e26-c14acd5fc167","Type":"ContainerStarted","Data":"f41b2d358da4377bd279adf2951862b4c38e40345dcf6d766c5e5ddbca914d0b"} Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.049133 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.584956 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.615674 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ffb3110-fda7-4a41-ad07-45fa242c6493" path="/var/lib/kubelet/pods/3ffb3110-fda7-4a41-ad07-45fa242c6493/volumes" Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.616315 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e38b644-227d-4010-b437-b39c8ebfeae1" path="/var/lib/kubelet/pods/5e38b644-227d-4010-b437-b39c8ebfeae1/volumes" Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.784638 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.932517 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-dns-svc\") pod \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.932874 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-config\") pod \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.933045 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-sb\") pod \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.933089 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-nb\") pod \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.933161 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xpmb\" (UniqueName: \"kubernetes.io/projected/5042020b-f4a1-4f94-b15c-d20917ffa7d4-kube-api-access-6xpmb\") pod \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\" (UID: \"5042020b-f4a1-4f94-b15c-d20917ffa7d4\") " Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.937625 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5042020b-f4a1-4f94-b15c-d20917ffa7d4-kube-api-access-6xpmb" (OuterVolumeSpecName: "kube-api-access-6xpmb") pod "5042020b-f4a1-4f94-b15c-d20917ffa7d4" (UID: "5042020b-f4a1-4f94-b15c-d20917ffa7d4"). InnerVolumeSpecName "kube-api-access-6xpmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.978254 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5042020b-f4a1-4f94-b15c-d20917ffa7d4" (UID: "5042020b-f4a1-4f94-b15c-d20917ffa7d4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.979305 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5042020b-f4a1-4f94-b15c-d20917ffa7d4" (UID: "5042020b-f4a1-4f94-b15c-d20917ffa7d4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.984712 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5042020b-f4a1-4f94-b15c-d20917ffa7d4" (UID: "5042020b-f4a1-4f94-b15c-d20917ffa7d4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:16:45 crc kubenswrapper[4787]: I0126 19:16:45.987613 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-config" (OuterVolumeSpecName: "config") pod "5042020b-f4a1-4f94-b15c-d20917ffa7d4" (UID: "5042020b-f4a1-4f94-b15c-d20917ffa7d4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.035379 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.035413 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.035425 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.035439 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5042020b-f4a1-4f94-b15c-d20917ffa7d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.035453 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xpmb\" (UniqueName: \"kubernetes.io/projected/5042020b-f4a1-4f94-b15c-d20917ffa7d4-kube-api-access-6xpmb\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.049622 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa","Type":"ContainerStarted","Data":"a4db9e3ef5cffa32c47a7f237e6c4667195787a9afe913b7f589de1ca9b25ac5"} Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.052894 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" event={"ID":"5042020b-f4a1-4f94-b15c-d20917ffa7d4","Type":"ContainerDied","Data":"4dcaa0cd451231d6b5aa71b11cd6a6c939974f47a1c85536400e119df66fab42"} Jan 
26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.052929 4787 scope.go:117] "RemoveContainer" containerID="d7b0151356e040fbe6a6f942a30b1a60c2013ccce193da044311ce28fd14f27e" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.053011 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7df7578dcc-hqbgd" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.069551 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e3700d-ee4d-44ad-8e26-c14acd5fc167","Type":"ContainerStarted","Data":"cf9bb3a7e242c5c9d31c856647fdeb29653a9e757623703c49add82438c2d0b7"} Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.069607 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e3700d-ee4d-44ad-8e26-c14acd5fc167","Type":"ContainerStarted","Data":"4f078299f827f6fcd33b4ba480febcfcb84aa0c3645674860a1faace1785653d"} Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.075834 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9f3f185-e54c-4700-8f16-3b2a0b153a28","Type":"ContainerStarted","Data":"1dbc676ddae94b10b05e619b686a758abb15598db3cc864a59c00e281fab5e5d"} Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.075907 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9f3f185-e54c-4700-8f16-3b2a0b153a28","Type":"ContainerStarted","Data":"53c607c505182ecaee186472e4a7db8346b14319e0e71ca3ef61aed1d65a2070"} Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.083481 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.083458522 podStartE2EDuration="3.083458522s" podCreationTimestamp="2026-01-26 19:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
19:16:46.081511704 +0000 UTC m=+5574.788647847" watchObservedRunningTime="2026-01-26 19:16:46.083458522 +0000 UTC m=+5574.790594655" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.098967 4787 scope.go:117] "RemoveContainer" containerID="4c1ad5645f51c2895fc49a72523fa100948eaedcb45f7bd05fc9dfe8930f3a97" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.118548 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.118361641 podStartE2EDuration="2.118361641s" podCreationTimestamp="2026-01-26 19:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:46.100008705 +0000 UTC m=+5574.807144838" watchObservedRunningTime="2026-01-26 19:16:46.118361641 +0000 UTC m=+5574.825497774" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.143915 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.143899611 podStartE2EDuration="2.143899611s" podCreationTimestamp="2026-01-26 19:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:46.121761843 +0000 UTC m=+5574.828897976" watchObservedRunningTime="2026-01-26 19:16:46.143899611 +0000 UTC m=+5574.851035744" Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.161978 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7df7578dcc-hqbgd"] Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.171459 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7df7578dcc-hqbgd"] Jan 26 19:16:46 crc kubenswrapper[4787]: I0126 19:16:46.590031 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:16:46 crc kubenswrapper[4787]: E0126 19:16:46.590290 4787 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:16:47 crc kubenswrapper[4787]: I0126 19:16:47.601553 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5042020b-f4a1-4f94-b15c-d20917ffa7d4" path="/var/lib/kubelet/pods/5042020b-f4a1-4f94-b15c-d20917ffa7d4/volumes" Jan 26 19:16:48 crc kubenswrapper[4787]: I0126 19:16:48.791168 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:16:48 crc kubenswrapper[4787]: I0126 19:16:48.791275 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:16:49 crc kubenswrapper[4787]: I0126 19:16:49.911942 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.359155 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.783226 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ldfbg"] Jan 26 19:16:52 crc kubenswrapper[4787]: E0126 19:16:52.783881 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5042020b-f4a1-4f94-b15c-d20917ffa7d4" containerName="dnsmasq-dns" Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.783906 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5042020b-f4a1-4f94-b15c-d20917ffa7d4" containerName="dnsmasq-dns" Jan 26 19:16:52 crc kubenswrapper[4787]: E0126 19:16:52.783937 4787 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5042020b-f4a1-4f94-b15c-d20917ffa7d4" containerName="init" Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.783965 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5042020b-f4a1-4f94-b15c-d20917ffa7d4" containerName="init" Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.784185 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5042020b-f4a1-4f94-b15c-d20917ffa7d4" containerName="dnsmasq-dns" Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.784929 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ldfbg" Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.789379 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.792791 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.801356 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ldfbg"] Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.862377 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg" Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.862469 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-scripts\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg" Jan 26 
19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.862596 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tln4q\" (UniqueName: \"kubernetes.io/projected/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-kube-api-access-tln4q\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.862622 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-config-data\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.964341 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tln4q\" (UniqueName: \"kubernetes.io/projected/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-kube-api-access-tln4q\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.964406 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-config-data\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.964450 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.964487 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-scripts\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.971117 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.972057 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-config-data\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.973509 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-scripts\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:16:52 crc kubenswrapper[4787]: I0126 19:16:52.990528 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tln4q\" (UniqueName: \"kubernetes.io/projected/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-kube-api-access-tln4q\") pod \"nova-cell1-cell-mapping-ldfbg\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") " pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:16:53 crc kubenswrapper[4787]: I0126 19:16:53.107454 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:16:53 crc kubenswrapper[4787]: I0126 19:16:53.638788 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ldfbg"]
Jan 26 19:16:53 crc kubenswrapper[4787]: W0126 19:16:53.640911 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a3bd2a_545a_41a2_b59a_0cb4d9cf5303.slice/crio-e0c5e329cfc6ffd9ae690cf4f147ed25c3002989c2a0de53ed509343c27bbd49 WatchSource:0}: Error finding container e0c5e329cfc6ffd9ae690cf4f147ed25c3002989c2a0de53ed509343c27bbd49: Status 404 returned error can't find the container with id e0c5e329cfc6ffd9ae690cf4f147ed25c3002989c2a0de53ed509343c27bbd49
Jan 26 19:16:53 crc kubenswrapper[4787]: I0126 19:16:53.791525 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 26 19:16:53 crc kubenswrapper[4787]: I0126 19:16:53.791575 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 26 19:16:54 crc kubenswrapper[4787]: I0126 19:16:54.154810 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ldfbg" event={"ID":"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303","Type":"ContainerStarted","Data":"ef6b0bb4c6e58bcc60675934e65b14d02274978fb788762733a7642fa23aca95"}
Jan 26 19:16:54 crc kubenswrapper[4787]: I0126 19:16:54.154859 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ldfbg" event={"ID":"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303","Type":"ContainerStarted","Data":"e0c5e329cfc6ffd9ae690cf4f147ed25c3002989c2a0de53ed509343c27bbd49"}
Jan 26 19:16:54 crc kubenswrapper[4787]: I0126 19:16:54.174859 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ldfbg" podStartSLOduration=2.174834443 podStartE2EDuration="2.174834443s" podCreationTimestamp="2026-01-26 19:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:16:54.172874065 +0000 UTC m=+5582.880010208" watchObservedRunningTime="2026-01-26 19:16:54.174834443 +0000 UTC m=+5582.881970576"
Jan 26 19:16:54 crc kubenswrapper[4787]: I0126 19:16:54.442302 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 19:16:54 crc kubenswrapper[4787]: I0126 19:16:54.442354 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 19:16:54 crc kubenswrapper[4787]: I0126 19:16:54.873161 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.62:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 19:16:54 crc kubenswrapper[4787]: I0126 19:16:54.873428 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.62:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 19:16:54 crc kubenswrapper[4787]: I0126 19:16:54.912235 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 26 19:16:54 crc kubenswrapper[4787]: I0126 19:16:54.939077 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 26 19:16:55 crc kubenswrapper[4787]: I0126 19:16:55.217895 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 26 19:16:55 crc kubenswrapper[4787]: I0126 19:16:55.525365 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.63:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 19:16:55 crc kubenswrapper[4787]: I0126 19:16:55.525341 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.63:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 19:16:57 crc kubenswrapper[4787]: I0126 19:16:57.591209 4787 scope.go:117] "RemoveContainer" containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14"
Jan 26 19:16:58 crc kubenswrapper[4787]: I0126 19:16:58.211069 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"7217ed496ed0b4c2a722c7ac3d16d8a1701ba2e7156a3c1353ad4ba7ede3fc2d"}
Jan 26 19:16:58 crc kubenswrapper[4787]: I0126 19:16:58.688880 4787 scope.go:117] "RemoveContainer" containerID="53a8287007092fec69bec5a4bd0084dddeab30c9bdf73cf5de22f16486d1c7c0"
Jan 26 19:16:59 crc kubenswrapper[4787]: I0126 19:16:59.220325 4787 generic.go:334] "Generic (PLEG): container finished" podID="c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303" containerID="ef6b0bb4c6e58bcc60675934e65b14d02274978fb788762733a7642fa23aca95" exitCode=0
Jan 26 19:16:59 crc kubenswrapper[4787]: I0126 19:16:59.220421 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ldfbg" event={"ID":"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303","Type":"ContainerDied","Data":"ef6b0bb4c6e58bcc60675934e65b14d02274978fb788762733a7642fa23aca95"}
Jan 26 19:17:00 crc kubenswrapper[4787]: I0126 19:17:00.571497 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:17:00 crc kubenswrapper[4787]: I0126 19:17:00.724488 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-combined-ca-bundle\") pod \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") "
Jan 26 19:17:00 crc kubenswrapper[4787]: I0126 19:17:00.724574 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-scripts\") pod \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") "
Jan 26 19:17:00 crc kubenswrapper[4787]: I0126 19:17:00.724676 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-config-data\") pod \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") "
Jan 26 19:17:00 crc kubenswrapper[4787]: I0126 19:17:00.725495 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tln4q\" (UniqueName: \"kubernetes.io/projected/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-kube-api-access-tln4q\") pod \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") "
Jan 26 19:17:00 crc kubenswrapper[4787]: I0126 19:17:00.730324 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-scripts" (OuterVolumeSpecName: "scripts") pod "c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303" (UID: "c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:17:00 crc kubenswrapper[4787]: I0126 19:17:00.730704 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-kube-api-access-tln4q" (OuterVolumeSpecName: "kube-api-access-tln4q") pod "c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303" (UID: "c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303"). InnerVolumeSpecName "kube-api-access-tln4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:17:00 crc kubenswrapper[4787]: E0126 19:17:00.750888 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-config-data podName:c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303 nodeName:}" failed. No retries permitted until 2026-01-26 19:17:01.250851161 +0000 UTC m=+5589.957987304 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-config-data") pod "c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303" (UID: "c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303") : error deleting /var/lib/kubelet/pods/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303/volume-subpaths: remove /var/lib/kubelet/pods/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303/volume-subpaths: no such file or directory
Jan 26 19:17:00 crc kubenswrapper[4787]: I0126 19:17:00.753850 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303" (UID: "c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:17:00 crc kubenswrapper[4787]: I0126 19:17:00.829077 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tln4q\" (UniqueName: \"kubernetes.io/projected/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-kube-api-access-tln4q\") on node \"crc\" DevicePath \"\""
Jan 26 19:17:00 crc kubenswrapper[4787]: I0126 19:17:00.829111 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:17:00 crc kubenswrapper[4787]: I0126 19:17:00.829120 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.237903 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ldfbg" event={"ID":"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303","Type":"ContainerDied","Data":"e0c5e329cfc6ffd9ae690cf4f147ed25c3002989c2a0de53ed509343c27bbd49"}
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.237939 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0c5e329cfc6ffd9ae690cf4f147ed25c3002989c2a0de53ed509343c27bbd49"
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.238049 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ldfbg"
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.339092 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-config-data\") pod \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\" (UID: \"c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303\") "
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.344470 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-config-data" (OuterVolumeSpecName: "config-data") pod "c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303" (UID: "c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.417499 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.417831 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerName="nova-api-log" containerID="cri-o://4f078299f827f6fcd33b4ba480febcfcb84aa0c3645674860a1faace1785653d" gracePeriod=30
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.418051 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerName="nova-api-api" containerID="cri-o://cf9bb3a7e242c5c9d31c856647fdeb29653a9e757623703c49add82438c2d0b7" gracePeriod=30
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.434443 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.434680 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c9f3f185-e54c-4700-8f16-3b2a0b153a28" containerName="nova-scheduler-scheduler" containerID="cri-o://1dbc676ddae94b10b05e619b686a758abb15598db3cc864a59c00e281fab5e5d" gracePeriod=30
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.441731 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.450154 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.450387 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerName="nova-metadata-log" containerID="cri-o://4903e12236bfee76dcd750d8aa471c08901b0d2551442e824bfb6c3f871f314c" gracePeriod=30
Jan 26 19:17:01 crc kubenswrapper[4787]: I0126 19:17:01.450487 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerName="nova-metadata-metadata" containerID="cri-o://a4db9e3ef5cffa32c47a7f237e6c4667195787a9afe913b7f589de1ca9b25ac5" gracePeriod=30
Jan 26 19:17:02 crc kubenswrapper[4787]: I0126 19:17:02.248636 4787 generic.go:334] "Generic (PLEG): container finished" podID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerID="4903e12236bfee76dcd750d8aa471c08901b0d2551442e824bfb6c3f871f314c" exitCode=143
Jan 26 19:17:02 crc kubenswrapper[4787]: I0126 19:17:02.248691 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa","Type":"ContainerDied","Data":"4903e12236bfee76dcd750d8aa471c08901b0d2551442e824bfb6c3f871f314c"}
Jan 26 19:17:02 crc kubenswrapper[4787]: I0126 19:17:02.251748 4787 generic.go:334] "Generic (PLEG): container finished" podID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerID="4f078299f827f6fcd33b4ba480febcfcb84aa0c3645674860a1faace1785653d" exitCode=143
Jan 26 19:17:02 crc kubenswrapper[4787]: I0126 19:17:02.251808 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e3700d-ee4d-44ad-8e26-c14acd5fc167","Type":"ContainerDied","Data":"4f078299f827f6fcd33b4ba480febcfcb84aa0c3645674860a1faace1785653d"}
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.268169 4787 generic.go:334] "Generic (PLEG): container finished" podID="c9f3f185-e54c-4700-8f16-3b2a0b153a28" containerID="1dbc676ddae94b10b05e619b686a758abb15598db3cc864a59c00e281fab5e5d" exitCode=0
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.268294 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9f3f185-e54c-4700-8f16-3b2a0b153a28","Type":"ContainerDied","Data":"1dbc676ddae94b10b05e619b686a758abb15598db3cc864a59c00e281fab5e5d"}
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.269064 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9f3f185-e54c-4700-8f16-3b2a0b153a28","Type":"ContainerDied","Data":"53c607c505182ecaee186472e4a7db8346b14319e0e71ca3ef61aed1d65a2070"}
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.269095 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53c607c505182ecaee186472e4a7db8346b14319e0e71ca3ef61aed1d65a2070"
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.285117 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.395310 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-config-data\") pod \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") "
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.395349 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qdxl\" (UniqueName: \"kubernetes.io/projected/c9f3f185-e54c-4700-8f16-3b2a0b153a28-kube-api-access-4qdxl\") pod \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") "
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.395411 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-combined-ca-bundle\") pod \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\" (UID: \"c9f3f185-e54c-4700-8f16-3b2a0b153a28\") "
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.400693 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f3f185-e54c-4700-8f16-3b2a0b153a28-kube-api-access-4qdxl" (OuterVolumeSpecName: "kube-api-access-4qdxl") pod "c9f3f185-e54c-4700-8f16-3b2a0b153a28" (UID: "c9f3f185-e54c-4700-8f16-3b2a0b153a28"). InnerVolumeSpecName "kube-api-access-4qdxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.421523 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9f3f185-e54c-4700-8f16-3b2a0b153a28" (UID: "c9f3f185-e54c-4700-8f16-3b2a0b153a28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.421888 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-config-data" (OuterVolumeSpecName: "config-data") pod "c9f3f185-e54c-4700-8f16-3b2a0b153a28" (UID: "c9f3f185-e54c-4700-8f16-3b2a0b153a28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.496985 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.497488 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qdxl\" (UniqueName: \"kubernetes.io/projected/c9f3f185-e54c-4700-8f16-3b2a0b153a28-kube-api-access-4qdxl\") on node \"crc\" DevicePath \"\""
Jan 26 19:17:04 crc kubenswrapper[4787]: I0126 19:17:04.497564 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3f185-e54c-4700-8f16-3b2a0b153a28-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.287298 4787 generic.go:334] "Generic (PLEG): container finished" podID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerID="cf9bb3a7e242c5c9d31c856647fdeb29653a9e757623703c49add82438c2d0b7" exitCode=0
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.287342 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e3700d-ee4d-44ad-8e26-c14acd5fc167","Type":"ContainerDied","Data":"cf9bb3a7e242c5c9d31c856647fdeb29653a9e757623703c49add82438c2d0b7"}
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.290342 4787 generic.go:334] "Generic (PLEG): container finished" podID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerID="a4db9e3ef5cffa32c47a7f237e6c4667195787a9afe913b7f589de1ca9b25ac5" exitCode=0
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.290426 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.290420 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa","Type":"ContainerDied","Data":"a4db9e3ef5cffa32c47a7f237e6c4667195787a9afe913b7f589de1ca9b25ac5"}
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.332938 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.345011 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.373268 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 19:17:05 crc kubenswrapper[4787]: E0126 19:17:05.373742 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303" containerName="nova-manage"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.373764 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303" containerName="nova-manage"
Jan 26 19:17:05 crc kubenswrapper[4787]: E0126 19:17:05.373778 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f3f185-e54c-4700-8f16-3b2a0b153a28" containerName="nova-scheduler-scheduler"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.373785 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f3f185-e54c-4700-8f16-3b2a0b153a28" containerName="nova-scheduler-scheduler"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.374031 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303" containerName="nova-manage"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.374059 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f3f185-e54c-4700-8f16-3b2a0b153a28" containerName="nova-scheduler-scheduler"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.374822 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.379182 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.394166 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.515527 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xclk7\" (UniqueName: \"kubernetes.io/projected/e67c45a4-ee2a-4de5-bbbd-214282bc8074-kube-api-access-xclk7\") pod \"nova-scheduler-0\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.515983 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-config-data\") pod \"nova-scheduler-0\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.516290 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.614291 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9f3f185-e54c-4700-8f16-3b2a0b153a28" path="/var/lib/kubelet/pods/c9f3f185-e54c-4700-8f16-3b2a0b153a28/volumes"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.618376 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.618525 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xclk7\" (UniqueName: \"kubernetes.io/projected/e67c45a4-ee2a-4de5-bbbd-214282bc8074-kube-api-access-xclk7\") pod \"nova-scheduler-0\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.618559 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-config-data\") pod \"nova-scheduler-0\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.624183 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-config-data\") pod \"nova-scheduler-0\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.624640 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.642676 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xclk7\" (UniqueName: \"kubernetes.io/projected/e67c45a4-ee2a-4de5-bbbd-214282bc8074-kube-api-access-xclk7\") pod \"nova-scheduler-0\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.695433 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.703565 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.705155 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.822512 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87dq4\" (UniqueName: \"kubernetes.io/projected/59e3700d-ee4d-44ad-8e26-c14acd5fc167-kube-api-access-87dq4\") pod \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") "
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.822594 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-combined-ca-bundle\") pod \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") "
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.822635 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-logs\") pod \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") "
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.822666 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e3700d-ee4d-44ad-8e26-c14acd5fc167-logs\") pod \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") "
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.822686 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-config-data\") pod \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") "
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.823141 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59e3700d-ee4d-44ad-8e26-c14acd5fc167-logs" (OuterVolumeSpecName: "logs") pod "59e3700d-ee4d-44ad-8e26-c14acd5fc167" (UID: "59e3700d-ee4d-44ad-8e26-c14acd5fc167"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.823190 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpq52\" (UniqueName: \"kubernetes.io/projected/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-kube-api-access-dpq52\") pod \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\" (UID: \"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa\") "
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.823216 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-config-data\") pod \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") "
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.823239 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-logs" (OuterVolumeSpecName: "logs") pod "bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" (UID: "bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.823533 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-combined-ca-bundle\") pod \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\" (UID: \"59e3700d-ee4d-44ad-8e26-c14acd5fc167\") "
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.823936 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-logs\") on node \"crc\" DevicePath \"\""
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.823964 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e3700d-ee4d-44ad-8e26-c14acd5fc167-logs\") on node \"crc\" DevicePath \"\""
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.826934 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e3700d-ee4d-44ad-8e26-c14acd5fc167-kube-api-access-87dq4" (OuterVolumeSpecName: "kube-api-access-87dq4") pod "59e3700d-ee4d-44ad-8e26-c14acd5fc167" (UID: "59e3700d-ee4d-44ad-8e26-c14acd5fc167"). InnerVolumeSpecName "kube-api-access-87dq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.827638 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-kube-api-access-dpq52" (OuterVolumeSpecName: "kube-api-access-dpq52") pod "bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" (UID: "bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa"). InnerVolumeSpecName "kube-api-access-dpq52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.850890 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-config-data" (OuterVolumeSpecName: "config-data") pod "bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" (UID: "bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.857195 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59e3700d-ee4d-44ad-8e26-c14acd5fc167" (UID: "59e3700d-ee4d-44ad-8e26-c14acd5fc167"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.863135 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" (UID: "bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.865363 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-config-data" (OuterVolumeSpecName: "config-data") pod "59e3700d-ee4d-44ad-8e26-c14acd5fc167" (UID: "59e3700d-ee4d-44ad-8e26-c14acd5fc167"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.925252 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.925286 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e3700d-ee4d-44ad-8e26-c14acd5fc167-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.925299 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87dq4\" (UniqueName: \"kubernetes.io/projected/59e3700d-ee4d-44ad-8e26-c14acd5fc167-kube-api-access-87dq4\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.925310 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.925318 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:05 crc kubenswrapper[4787]: I0126 19:17:05.925326 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpq52\" (UniqueName: \"kubernetes.io/projected/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa-kube-api-access-dpq52\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.139879 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:17:06 crc kubenswrapper[4787]: W0126 19:17:06.145152 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode67c45a4_ee2a_4de5_bbbd_214282bc8074.slice/crio-0cf134ec57fb758b22aa908b76107a75814e177087c0d4be58713dbf06a41832 WatchSource:0}: Error finding container 0cf134ec57fb758b22aa908b76107a75814e177087c0d4be58713dbf06a41832: Status 404 returned error can't find the container with id 0cf134ec57fb758b22aa908b76107a75814e177087c0d4be58713dbf06a41832 Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.303037 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.303028 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa","Type":"ContainerDied","Data":"054666a1016352dafea0cff7a9c9819e03ca27f9f0ae9f1b2580f61a4d692fdf"} Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.303467 4787 scope.go:117] "RemoveContainer" containerID="a4db9e3ef5cffa32c47a7f237e6c4667195787a9afe913b7f589de1ca9b25ac5" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.306204 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e67c45a4-ee2a-4de5-bbbd-214282bc8074","Type":"ContainerStarted","Data":"0cf134ec57fb758b22aa908b76107a75814e177087c0d4be58713dbf06a41832"} Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.307930 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e3700d-ee4d-44ad-8e26-c14acd5fc167","Type":"ContainerDied","Data":"f41b2d358da4377bd279adf2951862b4c38e40345dcf6d766c5e5ddbca914d0b"} Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.308002 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.342664 4787 scope.go:117] "RemoveContainer" containerID="4903e12236bfee76dcd750d8aa471c08901b0d2551442e824bfb6c3f871f314c" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.346017 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.356117 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.368685 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.372898 4787 scope.go:117] "RemoveContainer" containerID="cf9bb3a7e242c5c9d31c856647fdeb29653a9e757623703c49add82438c2d0b7" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.385232 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:17:06 crc kubenswrapper[4787]: E0126 19:17:06.385657 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerName="nova-api-log" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.385685 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerName="nova-api-log" Jan 26 19:17:06 crc kubenswrapper[4787]: E0126 19:17:06.385721 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerName="nova-api-api" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.385730 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerName="nova-api-api" Jan 26 19:17:06 crc kubenswrapper[4787]: E0126 19:17:06.385751 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerName="nova-metadata-metadata" 
Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.385759 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerName="nova-metadata-metadata" Jan 26 19:17:06 crc kubenswrapper[4787]: E0126 19:17:06.385769 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerName="nova-metadata-log" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.385776 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerName="nova-metadata-log" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.386016 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerName="nova-api-log" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.386032 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" containerName="nova-api-api" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.386047 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerName="nova-metadata-log" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.386059 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" containerName="nova-metadata-metadata" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.397747 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.397895 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.403085 4787 scope.go:117] "RemoveContainer" containerID="4f078299f827f6fcd33b4ba480febcfcb84aa0c3645674860a1faace1785653d" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.403310 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.408388 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.420652 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.425446 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.429640 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.430774 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.540304 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-config-data\") pod \"nova-api-0\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.540386 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ab3b41-2361-43af-9b1d-821e95213021-logs\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.540513 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czzp8\" (UniqueName: \"kubernetes.io/projected/d2ab3b41-2361-43af-9b1d-821e95213021-kube-api-access-czzp8\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.540978 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/110a4e2b-0431-43a0-a95d-7038e00e787a-logs\") pod \"nova-api-0\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.541171 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx8j8\" (UniqueName: \"kubernetes.io/projected/110a4e2b-0431-43a0-a95d-7038e00e787a-kube-api-access-mx8j8\") pod \"nova-api-0\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.541248 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.541444 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-config-data\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.541550 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.645983 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-config-data\") pod \"nova-api-0\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.646035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ab3b41-2361-43af-9b1d-821e95213021-logs\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.646080 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czzp8\" (UniqueName: \"kubernetes.io/projected/d2ab3b41-2361-43af-9b1d-821e95213021-kube-api-access-czzp8\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.646185 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/110a4e2b-0431-43a0-a95d-7038e00e787a-logs\") pod \"nova-api-0\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.646251 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx8j8\" (UniqueName: \"kubernetes.io/projected/110a4e2b-0431-43a0-a95d-7038e00e787a-kube-api-access-mx8j8\") pod \"nova-api-0\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 
19:17:06.646292 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.646348 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-config-data\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.646404 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.646646 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ab3b41-2361-43af-9b1d-821e95213021-logs\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.646860 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/110a4e2b-0431-43a0-a95d-7038e00e787a-logs\") pod \"nova-api-0\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.651155 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.651465 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-config-data\") pod \"nova-api-0\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.652637 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.669371 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czzp8\" (UniqueName: \"kubernetes.io/projected/d2ab3b41-2361-43af-9b1d-821e95213021-kube-api-access-czzp8\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.671243 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-config-data\") pod \"nova-metadata-0\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.671391 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx8j8\" (UniqueName: \"kubernetes.io/projected/110a4e2b-0431-43a0-a95d-7038e00e787a-kube-api-access-mx8j8\") pod \"nova-api-0\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") " pod="openstack/nova-api-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.735096 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:17:06 crc kubenswrapper[4787]: I0126 19:17:06.745452 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:17:07 crc kubenswrapper[4787]: W0126 19:17:07.209609 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod110a4e2b_0431_43a0_a95d_7038e00e787a.slice/crio-5957912fc3d6d0cdf38a35efeed754a63efcf5114a48f29eeed5e090de6e6437 WatchSource:0}: Error finding container 5957912fc3d6d0cdf38a35efeed754a63efcf5114a48f29eeed5e090de6e6437: Status 404 returned error can't find the container with id 5957912fc3d6d0cdf38a35efeed754a63efcf5114a48f29eeed5e090de6e6437 Jan 26 19:17:07 crc kubenswrapper[4787]: I0126 19:17:07.212630 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:17:07 crc kubenswrapper[4787]: I0126 19:17:07.227282 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:17:07 crc kubenswrapper[4787]: I0126 19:17:07.321357 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"110a4e2b-0431-43a0-a95d-7038e00e787a","Type":"ContainerStarted","Data":"5957912fc3d6d0cdf38a35efeed754a63efcf5114a48f29eeed5e090de6e6437"} Jan 26 19:17:07 crc kubenswrapper[4787]: I0126 19:17:07.322891 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e67c45a4-ee2a-4de5-bbbd-214282bc8074","Type":"ContainerStarted","Data":"1c913db43e9dd50a4bb9ab640f9342460452ce6c9bc6e3709e5627e0f011ad9f"} Jan 26 19:17:07 crc kubenswrapper[4787]: I0126 19:17:07.324616 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d2ab3b41-2361-43af-9b1d-821e95213021","Type":"ContainerStarted","Data":"f01a620a08f2bb68b811b3bf564aac24f959d7d51882de09a83426a9aa31f4b4"} Jan 26 19:17:07 crc 
kubenswrapper[4787]: I0126 19:17:07.347852 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.347828029 podStartE2EDuration="2.347828029s" podCreationTimestamp="2026-01-26 19:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:17:07.339300012 +0000 UTC m=+5596.046436145" watchObservedRunningTime="2026-01-26 19:17:07.347828029 +0000 UTC m=+5596.054964172" Jan 26 19:17:07 crc kubenswrapper[4787]: I0126 19:17:07.602505 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e3700d-ee4d-44ad-8e26-c14acd5fc167" path="/var/lib/kubelet/pods/59e3700d-ee4d-44ad-8e26-c14acd5fc167/volumes" Jan 26 19:17:07 crc kubenswrapper[4787]: I0126 19:17:07.603519 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa" path="/var/lib/kubelet/pods/bf22f9b5-2a21-43b8-ae31-4f7c16cd45fa/volumes" Jan 26 19:17:08 crc kubenswrapper[4787]: I0126 19:17:08.335763 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d2ab3b41-2361-43af-9b1d-821e95213021","Type":"ContainerStarted","Data":"5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399"} Jan 26 19:17:08 crc kubenswrapper[4787]: I0126 19:17:08.336837 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d2ab3b41-2361-43af-9b1d-821e95213021","Type":"ContainerStarted","Data":"05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0"} Jan 26 19:17:08 crc kubenswrapper[4787]: I0126 19:17:08.339832 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"110a4e2b-0431-43a0-a95d-7038e00e787a","Type":"ContainerStarted","Data":"65f0fed7bce90085493a3b6dfc65c9510ac71076c0e5fd4b2e999fb6ef4d1bbe"} Jan 26 19:17:08 crc kubenswrapper[4787]: I0126 
19:17:08.339861 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"110a4e2b-0431-43a0-a95d-7038e00e787a","Type":"ContainerStarted","Data":"95a740208506e3f242e34e12256e4d08213c6e72ebd02840d79f7499064972d9"} Jan 26 19:17:08 crc kubenswrapper[4787]: I0126 19:17:08.361237 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.361219771 podStartE2EDuration="2.361219771s" podCreationTimestamp="2026-01-26 19:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:17:08.355294467 +0000 UTC m=+5597.062430600" watchObservedRunningTime="2026-01-26 19:17:08.361219771 +0000 UTC m=+5597.068355904" Jan 26 19:17:08 crc kubenswrapper[4787]: I0126 19:17:08.382879 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.382862876 podStartE2EDuration="2.382862876s" podCreationTimestamp="2026-01-26 19:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:17:08.376653886 +0000 UTC m=+5597.083790019" watchObservedRunningTime="2026-01-26 19:17:08.382862876 +0000 UTC m=+5597.089999009" Jan 26 19:17:10 crc kubenswrapper[4787]: I0126 19:17:10.703713 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 19:17:11 crc kubenswrapper[4787]: I0126 19:17:11.735826 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:17:11 crc kubenswrapper[4787]: I0126 19:17:11.736144 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:17:15 crc kubenswrapper[4787]: I0126 19:17:15.703787 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Jan 26 19:17:15 crc kubenswrapper[4787]: I0126 19:17:15.737268 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 19:17:16 crc kubenswrapper[4787]: I0126 19:17:16.440861 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 19:17:16 crc kubenswrapper[4787]: I0126 19:17:16.735590 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 19:17:16 crc kubenswrapper[4787]: I0126 19:17:16.735678 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 19:17:16 crc kubenswrapper[4787]: I0126 19:17:16.746000 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:17:16 crc kubenswrapper[4787]: I0126 19:17:16.746052 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:17:17 crc kubenswrapper[4787]: I0126 19:17:17.901151 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.68:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:17:17 crc kubenswrapper[4787]: I0126 19:17:17.901168 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.67:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:17:17 crc kubenswrapper[4787]: I0126 19:17:17.901190 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" 
containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.68:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:17:17 crc kubenswrapper[4787]: I0126 19:17:17.901151 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.67:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:17:26 crc kubenswrapper[4787]: I0126 19:17:26.785455 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 19:17:26 crc kubenswrapper[4787]: I0126 19:17:26.786076 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 19:17:26 crc kubenswrapper[4787]: I0126 19:17:26.801397 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 19:17:26 crc kubenswrapper[4787]: I0126 19:17:26.802829 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 19:17:26 crc kubenswrapper[4787]: I0126 19:17:26.818760 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 19:17:26 crc kubenswrapper[4787]: I0126 19:17:26.819186 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 19:17:26 crc kubenswrapper[4787]: I0126 19:17:26.821885 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 19:17:26 crc kubenswrapper[4787]: I0126 19:17:26.823055 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.525348 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.529036 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.689410 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64f475f7d7-shg8h"] Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.691781 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.701838 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64f475f7d7-shg8h"] Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.834604 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-dns-svc\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.834675 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-config\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.834941 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24cbd\" (UniqueName: \"kubernetes.io/projected/41d535d7-6dcc-42ce-aba4-00343d280c38-kube-api-access-24cbd\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.835076 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-sb\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.835124 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-nb\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.937141 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-dns-svc\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.938088 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-dns-svc\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.939566 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-config\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.939782 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-24cbd\" (UniqueName: \"kubernetes.io/projected/41d535d7-6dcc-42ce-aba4-00343d280c38-kube-api-access-24cbd\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.939826 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-sb\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.939859 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-nb\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.940090 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-config\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.940791 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-sb\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.940819 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-nb\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:27 crc kubenswrapper[4787]: I0126 19:17:27.960774 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24cbd\" (UniqueName: \"kubernetes.io/projected/41d535d7-6dcc-42ce-aba4-00343d280c38-kube-api-access-24cbd\") pod \"dnsmasq-dns-64f475f7d7-shg8h\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:28 crc kubenswrapper[4787]: I0126 19:17:28.026566 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:28 crc kubenswrapper[4787]: I0126 19:17:28.490519 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64f475f7d7-shg8h"] Jan 26 19:17:28 crc kubenswrapper[4787]: W0126 19:17:28.503277 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41d535d7_6dcc_42ce_aba4_00343d280c38.slice/crio-8b098789455c96ab75071d2320205932988f057c51cc647dcf7eaa9db6886a58 WatchSource:0}: Error finding container 8b098789455c96ab75071d2320205932988f057c51cc647dcf7eaa9db6886a58: Status 404 returned error can't find the container with id 8b098789455c96ab75071d2320205932988f057c51cc647dcf7eaa9db6886a58 Jan 26 19:17:28 crc kubenswrapper[4787]: I0126 19:17:28.535598 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" event={"ID":"41d535d7-6dcc-42ce-aba4-00343d280c38","Type":"ContainerStarted","Data":"8b098789455c96ab75071d2320205932988f057c51cc647dcf7eaa9db6886a58"} Jan 26 19:17:29 crc kubenswrapper[4787]: I0126 19:17:29.543823 4787 generic.go:334] "Generic (PLEG): container finished" podID="41d535d7-6dcc-42ce-aba4-00343d280c38" 
containerID="fd36dd5042454ed02c1d3d6194998f82329b4670392c52af040f24501a3be601" exitCode=0 Jan 26 19:17:29 crc kubenswrapper[4787]: I0126 19:17:29.543963 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" event={"ID":"41d535d7-6dcc-42ce-aba4-00343d280c38","Type":"ContainerDied","Data":"fd36dd5042454ed02c1d3d6194998f82329b4670392c52af040f24501a3be601"} Jan 26 19:17:30 crc kubenswrapper[4787]: I0126 19:17:30.556628 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" event={"ID":"41d535d7-6dcc-42ce-aba4-00343d280c38","Type":"ContainerStarted","Data":"6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd"} Jan 26 19:17:30 crc kubenswrapper[4787]: I0126 19:17:30.556873 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:30 crc kubenswrapper[4787]: I0126 19:17:30.582147 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" podStartSLOduration=3.582117856 podStartE2EDuration="3.582117856s" podCreationTimestamp="2026-01-26 19:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:17:30.577053543 +0000 UTC m=+5619.284189676" watchObservedRunningTime="2026-01-26 19:17:30.582117856 +0000 UTC m=+5619.289254019" Jan 26 19:17:38 crc kubenswrapper[4787]: I0126 19:17:38.028216 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:17:38 crc kubenswrapper[4787]: I0126 19:17:38.107971 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbb8b55cf-7dfbl"] Jan 26 19:17:38 crc kubenswrapper[4787]: I0126 19:17:38.109263 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" 
podUID="bda1037c-3b61-4045-aa57-6cefda2462bb" containerName="dnsmasq-dns" containerID="cri-o://ab4eefbd23c4e8d3b1101749ece18d520aedc9d3a07c0cb08cb15ef38468a5a7" gracePeriod=10 Jan 26 19:17:38 crc kubenswrapper[4787]: I0126 19:17:38.633346 4787 generic.go:334] "Generic (PLEG): container finished" podID="bda1037c-3b61-4045-aa57-6cefda2462bb" containerID="ab4eefbd23c4e8d3b1101749ece18d520aedc9d3a07c0cb08cb15ef38468a5a7" exitCode=0 Jan 26 19:17:38 crc kubenswrapper[4787]: I0126 19:17:38.633379 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" event={"ID":"bda1037c-3b61-4045-aa57-6cefda2462bb","Type":"ContainerDied","Data":"ab4eefbd23c4e8d3b1101749ece18d520aedc9d3a07c0cb08cb15ef38468a5a7"} Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.285847 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.342447 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-config\") pod \"bda1037c-3b61-4045-aa57-6cefda2462bb\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.342516 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2db2z\" (UniqueName: \"kubernetes.io/projected/bda1037c-3b61-4045-aa57-6cefda2462bb-kube-api-access-2db2z\") pod \"bda1037c-3b61-4045-aa57-6cefda2462bb\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.342546 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-dns-svc\") pod \"bda1037c-3b61-4045-aa57-6cefda2462bb\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " Jan 26 19:17:39 crc 
kubenswrapper[4787]: I0126 19:17:39.342570 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-sb\") pod \"bda1037c-3b61-4045-aa57-6cefda2462bb\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.342688 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-nb\") pod \"bda1037c-3b61-4045-aa57-6cefda2462bb\" (UID: \"bda1037c-3b61-4045-aa57-6cefda2462bb\") " Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.353687 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda1037c-3b61-4045-aa57-6cefda2462bb-kube-api-access-2db2z" (OuterVolumeSpecName: "kube-api-access-2db2z") pod "bda1037c-3b61-4045-aa57-6cefda2462bb" (UID: "bda1037c-3b61-4045-aa57-6cefda2462bb"). InnerVolumeSpecName "kube-api-access-2db2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.392755 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bda1037c-3b61-4045-aa57-6cefda2462bb" (UID: "bda1037c-3b61-4045-aa57-6cefda2462bb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.393407 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-config" (OuterVolumeSpecName: "config") pod "bda1037c-3b61-4045-aa57-6cefda2462bb" (UID: "bda1037c-3b61-4045-aa57-6cefda2462bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.400799 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bda1037c-3b61-4045-aa57-6cefda2462bb" (UID: "bda1037c-3b61-4045-aa57-6cefda2462bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.409162 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bda1037c-3b61-4045-aa57-6cefda2462bb" (UID: "bda1037c-3b61-4045-aa57-6cefda2462bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.445235 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.445274 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2db2z\" (UniqueName: \"kubernetes.io/projected/bda1037c-3b61-4045-aa57-6cefda2462bb-kube-api-access-2db2z\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.445287 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.445300 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:39 crc 
kubenswrapper[4787]: I0126 19:17:39.445312 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda1037c-3b61-4045-aa57-6cefda2462bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.642641 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" event={"ID":"bda1037c-3b61-4045-aa57-6cefda2462bb","Type":"ContainerDied","Data":"32eacedf29af11a87198e9159528e5b2453996bfecf70727a7864c610720cc73"} Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.642708 4787 scope.go:117] "RemoveContainer" containerID="ab4eefbd23c4e8d3b1101749ece18d520aedc9d3a07c0cb08cb15ef38468a5a7" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.642723 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbb8b55cf-7dfbl" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.670923 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbb8b55cf-7dfbl"] Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.671344 4787 scope.go:117] "RemoveContainer" containerID="4175050a375c3b89f70bded51b149c1650d7e62347cfbee773212fb038f77736" Jan 26 19:17:39 crc kubenswrapper[4787]: I0126 19:17:39.680520 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbb8b55cf-7dfbl"] Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.183456 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-2468z"] Jan 26 19:17:41 crc kubenswrapper[4787]: E0126 19:17:41.184207 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda1037c-3b61-4045-aa57-6cefda2462bb" containerName="dnsmasq-dns" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.184224 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda1037c-3b61-4045-aa57-6cefda2462bb" containerName="dnsmasq-dns" Jan 26 19:17:41 crc 
kubenswrapper[4787]: E0126 19:17:41.184248 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda1037c-3b61-4045-aa57-6cefda2462bb" containerName="init" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.184254 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda1037c-3b61-4045-aa57-6cefda2462bb" containerName="init" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.184426 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda1037c-3b61-4045-aa57-6cefda2462bb" containerName="dnsmasq-dns" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.185125 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2468z" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.199231 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2468z"] Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.282572 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84chk\" (UniqueName: \"kubernetes.io/projected/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-kube-api-access-84chk\") pod \"cinder-db-create-2468z\" (UID: \"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc\") " pod="openstack/cinder-db-create-2468z" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.282775 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-operator-scripts\") pod \"cinder-db-create-2468z\" (UID: \"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc\") " pod="openstack/cinder-db-create-2468z" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.284382 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e031-account-create-update-qzx5b"] Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.285526 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e031-account-create-update-qzx5b" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.288673 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.292751 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e031-account-create-update-qzx5b"] Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.384548 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvd5m\" (UniqueName: \"kubernetes.io/projected/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-kube-api-access-xvd5m\") pod \"cinder-e031-account-create-update-qzx5b\" (UID: \"03d0ce5d-8a55-40b3-bca2-ab3439ceec22\") " pod="openstack/cinder-e031-account-create-update-qzx5b" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.384678 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84chk\" (UniqueName: \"kubernetes.io/projected/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-kube-api-access-84chk\") pod \"cinder-db-create-2468z\" (UID: \"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc\") " pod="openstack/cinder-db-create-2468z" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.384724 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-operator-scripts\") pod \"cinder-db-create-2468z\" (UID: \"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc\") " pod="openstack/cinder-db-create-2468z" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.384772 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-operator-scripts\") pod \"cinder-e031-account-create-update-qzx5b\" (UID: 
\"03d0ce5d-8a55-40b3-bca2-ab3439ceec22\") " pod="openstack/cinder-e031-account-create-update-qzx5b" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.385663 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-operator-scripts\") pod \"cinder-db-create-2468z\" (UID: \"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc\") " pod="openstack/cinder-db-create-2468z" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.405823 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84chk\" (UniqueName: \"kubernetes.io/projected/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-kube-api-access-84chk\") pod \"cinder-db-create-2468z\" (UID: \"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc\") " pod="openstack/cinder-db-create-2468z" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.486918 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-operator-scripts\") pod \"cinder-e031-account-create-update-qzx5b\" (UID: \"03d0ce5d-8a55-40b3-bca2-ab3439ceec22\") " pod="openstack/cinder-e031-account-create-update-qzx5b" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.486996 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvd5m\" (UniqueName: \"kubernetes.io/projected/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-kube-api-access-xvd5m\") pod \"cinder-e031-account-create-update-qzx5b\" (UID: \"03d0ce5d-8a55-40b3-bca2-ab3439ceec22\") " pod="openstack/cinder-e031-account-create-update-qzx5b" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.487686 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-operator-scripts\") pod \"cinder-e031-account-create-update-qzx5b\" 
(UID: \"03d0ce5d-8a55-40b3-bca2-ab3439ceec22\") " pod="openstack/cinder-e031-account-create-update-qzx5b" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.502886 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2468z" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.504633 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvd5m\" (UniqueName: \"kubernetes.io/projected/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-kube-api-access-xvd5m\") pod \"cinder-e031-account-create-update-qzx5b\" (UID: \"03d0ce5d-8a55-40b3-bca2-ab3439ceec22\") " pod="openstack/cinder-e031-account-create-update-qzx5b" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.600405 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda1037c-3b61-4045-aa57-6cefda2462bb" path="/var/lib/kubelet/pods/bda1037c-3b61-4045-aa57-6cefda2462bb/volumes" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.612295 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e031-account-create-update-qzx5b" Jan 26 19:17:41 crc kubenswrapper[4787]: I0126 19:17:41.976203 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2468z"] Jan 26 19:17:42 crc kubenswrapper[4787]: I0126 19:17:42.088875 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e031-account-create-update-qzx5b"] Jan 26 19:17:42 crc kubenswrapper[4787]: W0126 19:17:42.101283 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03d0ce5d_8a55_40b3_bca2_ab3439ceec22.slice/crio-e9abb8d26784d785f8d6949488da0484dd22ff21f924aa3c0cf1fbaacf1b0c7a WatchSource:0}: Error finding container e9abb8d26784d785f8d6949488da0484dd22ff21f924aa3c0cf1fbaacf1b0c7a: Status 404 returned error can't find the container with id e9abb8d26784d785f8d6949488da0484dd22ff21f924aa3c0cf1fbaacf1b0c7a Jan 26 19:17:42 crc kubenswrapper[4787]: I0126 19:17:42.692373 4787 generic.go:334] "Generic (PLEG): container finished" podID="03d0ce5d-8a55-40b3-bca2-ab3439ceec22" containerID="2bd48796434870848e64f1b66dd8bc1346dfc073af5829f5833f29a52641af5e" exitCode=0 Jan 26 19:17:42 crc kubenswrapper[4787]: I0126 19:17:42.692419 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e031-account-create-update-qzx5b" event={"ID":"03d0ce5d-8a55-40b3-bca2-ab3439ceec22","Type":"ContainerDied","Data":"2bd48796434870848e64f1b66dd8bc1346dfc073af5829f5833f29a52641af5e"} Jan 26 19:17:42 crc kubenswrapper[4787]: I0126 19:17:42.692899 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e031-account-create-update-qzx5b" event={"ID":"03d0ce5d-8a55-40b3-bca2-ab3439ceec22","Type":"ContainerStarted","Data":"e9abb8d26784d785f8d6949488da0484dd22ff21f924aa3c0cf1fbaacf1b0c7a"} Jan 26 19:17:42 crc kubenswrapper[4787]: I0126 19:17:42.695031 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="eebd6fa5-1a6d-4749-ada3-fdd8c774abdc" containerID="0617d01c395c7926041a0740b485dcee446a92d03ee4ec34a7139e80c333a264" exitCode=0 Jan 26 19:17:42 crc kubenswrapper[4787]: I0126 19:17:42.695063 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2468z" event={"ID":"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc","Type":"ContainerDied","Data":"0617d01c395c7926041a0740b485dcee446a92d03ee4ec34a7139e80c333a264"} Jan 26 19:17:42 crc kubenswrapper[4787]: I0126 19:17:42.695115 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2468z" event={"ID":"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc","Type":"ContainerStarted","Data":"5ae5c149ff57d658f466a0295da8401df3745d35c5239985265fea8207eddbdc"} Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.102860 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2468z" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.111230 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e031-account-create-update-qzx5b" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.137324 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvd5m\" (UniqueName: \"kubernetes.io/projected/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-kube-api-access-xvd5m\") pod \"03d0ce5d-8a55-40b3-bca2-ab3439ceec22\" (UID: \"03d0ce5d-8a55-40b3-bca2-ab3439ceec22\") " Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.137379 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84chk\" (UniqueName: \"kubernetes.io/projected/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-kube-api-access-84chk\") pod \"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc\" (UID: \"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc\") " Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.137424 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-operator-scripts\") pod \"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc\" (UID: \"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc\") " Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.137560 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-operator-scripts\") pod \"03d0ce5d-8a55-40b3-bca2-ab3439ceec22\" (UID: \"03d0ce5d-8a55-40b3-bca2-ab3439ceec22\") " Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.138098 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eebd6fa5-1a6d-4749-ada3-fdd8c774abdc" (UID: "eebd6fa5-1a6d-4749-ada3-fdd8c774abdc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.138431 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03d0ce5d-8a55-40b3-bca2-ab3439ceec22" (UID: "03d0ce5d-8a55-40b3-bca2-ab3439ceec22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.142527 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-kube-api-access-xvd5m" (OuterVolumeSpecName: "kube-api-access-xvd5m") pod "03d0ce5d-8a55-40b3-bca2-ab3439ceec22" (UID: "03d0ce5d-8a55-40b3-bca2-ab3439ceec22"). InnerVolumeSpecName "kube-api-access-xvd5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.143015 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-kube-api-access-84chk" (OuterVolumeSpecName: "kube-api-access-84chk") pod "eebd6fa5-1a6d-4749-ada3-fdd8c774abdc" (UID: "eebd6fa5-1a6d-4749-ada3-fdd8c774abdc"). InnerVolumeSpecName "kube-api-access-84chk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.239283 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvd5m\" (UniqueName: \"kubernetes.io/projected/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-kube-api-access-xvd5m\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.239322 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84chk\" (UniqueName: \"kubernetes.io/projected/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-kube-api-access-84chk\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.239332 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.239341 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d0ce5d-8a55-40b3-bca2-ab3439ceec22-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.713831 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2468z" event={"ID":"eebd6fa5-1a6d-4749-ada3-fdd8c774abdc","Type":"ContainerDied","Data":"5ae5c149ff57d658f466a0295da8401df3745d35c5239985265fea8207eddbdc"} Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.713859 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2468z" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.713876 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ae5c149ff57d658f466a0295da8401df3745d35c5239985265fea8207eddbdc" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.716461 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e031-account-create-update-qzx5b" event={"ID":"03d0ce5d-8a55-40b3-bca2-ab3439ceec22","Type":"ContainerDied","Data":"e9abb8d26784d785f8d6949488da0484dd22ff21f924aa3c0cf1fbaacf1b0c7a"} Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.716503 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9abb8d26784d785f8d6949488da0484dd22ff21f924aa3c0cf1fbaacf1b0c7a" Jan 26 19:17:44 crc kubenswrapper[4787]: I0126 19:17:44.716554 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e031-account-create-update-qzx5b" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.419026 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7589r"] Jan 26 19:17:46 crc kubenswrapper[4787]: E0126 19:17:46.419701 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d0ce5d-8a55-40b3-bca2-ab3439ceec22" containerName="mariadb-account-create-update" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.419714 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d0ce5d-8a55-40b3-bca2-ab3439ceec22" containerName="mariadb-account-create-update" Jan 26 19:17:46 crc kubenswrapper[4787]: E0126 19:17:46.419737 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebd6fa5-1a6d-4749-ada3-fdd8c774abdc" containerName="mariadb-database-create" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.419743 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebd6fa5-1a6d-4749-ada3-fdd8c774abdc" 
containerName="mariadb-database-create" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.419917 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d0ce5d-8a55-40b3-bca2-ab3439ceec22" containerName="mariadb-account-create-update" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.419927 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="eebd6fa5-1a6d-4749-ada3-fdd8c774abdc" containerName="mariadb-database-create" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.420626 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.422496 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-txgph" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.422729 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.422875 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.429681 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7589r"] Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.528736 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/582540f4-da0d-43fa-9bb8-fe6642cd5af0-etc-machine-id\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.528929 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-db-sync-config-data\") pod \"cinder-db-sync-7589r\" 
(UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.529013 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nf7b\" (UniqueName: \"kubernetes.io/projected/582540f4-da0d-43fa-9bb8-fe6642cd5af0-kube-api-access-6nf7b\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.529049 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-scripts\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.529161 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-config-data\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.529298 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-combined-ca-bundle\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.631122 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-db-sync-config-data\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " 
pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.631186 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nf7b\" (UniqueName: \"kubernetes.io/projected/582540f4-da0d-43fa-9bb8-fe6642cd5af0-kube-api-access-6nf7b\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.631209 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-scripts\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.631239 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-config-data\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.631310 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-combined-ca-bundle\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.631421 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/582540f4-da0d-43fa-9bb8-fe6642cd5af0-etc-machine-id\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.631845 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/582540f4-da0d-43fa-9bb8-fe6642cd5af0-etc-machine-id\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.636776 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-db-sync-config-data\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.637455 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-combined-ca-bundle\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.639299 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-scripts\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.644207 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-config-data\") pod \"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.650979 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nf7b\" (UniqueName: \"kubernetes.io/projected/582540f4-da0d-43fa-9bb8-fe6642cd5af0-kube-api-access-6nf7b\") pod 
\"cinder-db-sync-7589r\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:46 crc kubenswrapper[4787]: I0126 19:17:46.769002 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:47 crc kubenswrapper[4787]: I0126 19:17:47.198759 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7589r"] Jan 26 19:17:47 crc kubenswrapper[4787]: I0126 19:17:47.742875 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7589r" event={"ID":"582540f4-da0d-43fa-9bb8-fe6642cd5af0","Type":"ContainerStarted","Data":"50989b0d93e3364600e5baa031ba0842d9f0573a8046d4b640c98574436dfae2"} Jan 26 19:17:48 crc kubenswrapper[4787]: I0126 19:17:48.754170 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7589r" event={"ID":"582540f4-da0d-43fa-9bb8-fe6642cd5af0","Type":"ContainerStarted","Data":"b9642c5edb6006ef9a0d56f60ae9debec0adbaed4ea73a5e8b2b1f0a66560c59"} Jan 26 19:17:48 crc kubenswrapper[4787]: I0126 19:17:48.786254 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7589r" podStartSLOduration=2.7862314 podStartE2EDuration="2.7862314s" podCreationTimestamp="2026-01-26 19:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:17:48.778689457 +0000 UTC m=+5637.485825610" watchObservedRunningTime="2026-01-26 19:17:48.7862314 +0000 UTC m=+5637.493367533" Jan 26 19:17:50 crc kubenswrapper[4787]: I0126 19:17:50.771697 4787 generic.go:334] "Generic (PLEG): container finished" podID="582540f4-da0d-43fa-9bb8-fe6642cd5af0" containerID="b9642c5edb6006ef9a0d56f60ae9debec0adbaed4ea73a5e8b2b1f0a66560c59" exitCode=0 Jan 26 19:17:50 crc kubenswrapper[4787]: I0126 19:17:50.771767 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-7589r" event={"ID":"582540f4-da0d-43fa-9bb8-fe6642cd5af0","Type":"ContainerDied","Data":"b9642c5edb6006ef9a0d56f60ae9debec0adbaed4ea73a5e8b2b1f0a66560c59"} Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.153425 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.339863 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nf7b\" (UniqueName: \"kubernetes.io/projected/582540f4-da0d-43fa-9bb8-fe6642cd5af0-kube-api-access-6nf7b\") pod \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.339902 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/582540f4-da0d-43fa-9bb8-fe6642cd5af0-etc-machine-id\") pod \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.339921 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-db-sync-config-data\") pod \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.339998 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-scripts\") pod \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.340049 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-config-data\") pod \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.340083 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-combined-ca-bundle\") pod \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\" (UID: \"582540f4-da0d-43fa-9bb8-fe6642cd5af0\") " Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.340565 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/582540f4-da0d-43fa-9bb8-fe6642cd5af0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "582540f4-da0d-43fa-9bb8-fe6642cd5af0" (UID: "582540f4-da0d-43fa-9bb8-fe6642cd5af0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.345492 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-scripts" (OuterVolumeSpecName: "scripts") pod "582540f4-da0d-43fa-9bb8-fe6642cd5af0" (UID: "582540f4-da0d-43fa-9bb8-fe6642cd5af0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.345619 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582540f4-da0d-43fa-9bb8-fe6642cd5af0-kube-api-access-6nf7b" (OuterVolumeSpecName: "kube-api-access-6nf7b") pod "582540f4-da0d-43fa-9bb8-fe6642cd5af0" (UID: "582540f4-da0d-43fa-9bb8-fe6642cd5af0"). InnerVolumeSpecName "kube-api-access-6nf7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.348146 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "582540f4-da0d-43fa-9bb8-fe6642cd5af0" (UID: "582540f4-da0d-43fa-9bb8-fe6642cd5af0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.368537 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "582540f4-da0d-43fa-9bb8-fe6642cd5af0" (UID: "582540f4-da0d-43fa-9bb8-fe6642cd5af0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.434395 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-config-data" (OuterVolumeSpecName: "config-data") pod "582540f4-da0d-43fa-9bb8-fe6642cd5af0" (UID: "582540f4-da0d-43fa-9bb8-fe6642cd5af0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.442405 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nf7b\" (UniqueName: \"kubernetes.io/projected/582540f4-da0d-43fa-9bb8-fe6642cd5af0-kube-api-access-6nf7b\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.442473 4787 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/582540f4-da0d-43fa-9bb8-fe6642cd5af0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.442489 4787 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.442500 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.442510 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.442522 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582540f4-da0d-43fa-9bb8-fe6642cd5af0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.796386 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7589r" event={"ID":"582540f4-da0d-43fa-9bb8-fe6642cd5af0","Type":"ContainerDied","Data":"50989b0d93e3364600e5baa031ba0842d9f0573a8046d4b640c98574436dfae2"} Jan 26 19:17:52 crc 
kubenswrapper[4787]: I0126 19:17:52.796428 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50989b0d93e3364600e5baa031ba0842d9f0573a8046d4b640c98574436dfae2" Jan 26 19:17:52 crc kubenswrapper[4787]: I0126 19:17:52.796887 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7589r" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.498019 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c8944c79c-64xh4"] Jan 26 19:17:53 crc kubenswrapper[4787]: E0126 19:17:53.498544 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582540f4-da0d-43fa-9bb8-fe6642cd5af0" containerName="cinder-db-sync" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.498560 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="582540f4-da0d-43fa-9bb8-fe6642cd5af0" containerName="cinder-db-sync" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.498728 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="582540f4-da0d-43fa-9bb8-fe6642cd5af0" containerName="cinder-db-sync" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.499777 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.508309 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c8944c79c-64xh4"] Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.633216 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.635096 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.638092 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-txgph" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.638335 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.638468 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.638599 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.648733 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.666304 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-config\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.666365 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.666424 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-dns-svc\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: 
\"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.666455 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v44h\" (UniqueName: \"kubernetes.io/projected/93e77987-fd7e-41c8-af53-afbbc13b6f5b-kube-api-access-5v44h\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.666498 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.768514 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v44h\" (UniqueName: \"kubernetes.io/projected/93e77987-fd7e-41c8-af53-afbbc13b6f5b-kube-api-access-5v44h\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.768579 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.768615 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.768672 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.768711 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data-custom\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.768735 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqqf4\" (UniqueName: \"kubernetes.io/projected/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-kube-api-access-tqqf4\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.768765 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-config\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.768786 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-logs\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc 
kubenswrapper[4787]: I0126 19:17:53.768830 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-scripts\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.768858 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.768909 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.768984 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-dns-svc\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.769842 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-config\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.770256 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-dns-svc\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.770437 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.770523 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.799157 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v44h\" (UniqueName: \"kubernetes.io/projected/93e77987-fd7e-41c8-af53-afbbc13b6f5b-kube-api-access-5v44h\") pod \"dnsmasq-dns-6c8944c79c-64xh4\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.824320 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.870369 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.870520 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.870577 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.870678 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data-custom\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.870695 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.870714 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tqqf4\" (UniqueName: \"kubernetes.io/projected/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-kube-api-access-tqqf4\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.871089 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-logs\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.871209 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-scripts\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.871514 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-logs\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.878599 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-scripts\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.878668 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.878890 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data-custom\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.879355 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.893820 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqqf4\" (UniqueName: \"kubernetes.io/projected/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-kube-api-access-tqqf4\") pod \"cinder-api-0\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " pod="openstack/cinder-api-0" Jan 26 19:17:53 crc kubenswrapper[4787]: I0126 19:17:53.979201 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:17:54 crc kubenswrapper[4787]: I0126 19:17:54.342543 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c8944c79c-64xh4"] Jan 26 19:17:54 crc kubenswrapper[4787]: W0126 19:17:54.468979 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c5a6ae7_830e_4eb4_bc70_cb12c45f76e4.slice/crio-72c2c2b6b656c5e87f86b7af85dea628e0f83f0b1236539cec34a8af53606fab WatchSource:0}: Error finding container 72c2c2b6b656c5e87f86b7af85dea628e0f83f0b1236539cec34a8af53606fab: Status 404 returned error can't find the container with id 72c2c2b6b656c5e87f86b7af85dea628e0f83f0b1236539cec34a8af53606fab Jan 26 19:17:54 crc kubenswrapper[4787]: I0126 19:17:54.474011 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:17:54 crc kubenswrapper[4787]: I0126 19:17:54.832517 4787 generic.go:334] "Generic (PLEG): container finished" podID="93e77987-fd7e-41c8-af53-afbbc13b6f5b" containerID="24a89068cba3a824087db65dac8b4a4a06bfd0288b9ee5c8c9d970b1a34f5d5e" exitCode=0 Jan 26 19:17:54 crc kubenswrapper[4787]: I0126 19:17:54.832793 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" event={"ID":"93e77987-fd7e-41c8-af53-afbbc13b6f5b","Type":"ContainerDied","Data":"24a89068cba3a824087db65dac8b4a4a06bfd0288b9ee5c8c9d970b1a34f5d5e"} Jan 26 19:17:54 crc kubenswrapper[4787]: I0126 19:17:54.832848 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" event={"ID":"93e77987-fd7e-41c8-af53-afbbc13b6f5b","Type":"ContainerStarted","Data":"406447dd4d1cecb00d552ca39893639ddf4ddb1ee4961891d605f09496511cee"} Jan 26 19:17:54 crc kubenswrapper[4787]: I0126 19:17:54.837657 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4","Type":"ContainerStarted","Data":"72c2c2b6b656c5e87f86b7af85dea628e0f83f0b1236539cec34a8af53606fab"} Jan 26 19:17:55 crc kubenswrapper[4787]: I0126 19:17:55.850241 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4","Type":"ContainerStarted","Data":"af15adcf625083d538337572d78e60abf215805e2c9fb5536556e46f1165c681"} Jan 26 19:17:55 crc kubenswrapper[4787]: I0126 19:17:55.850937 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 19:17:55 crc kubenswrapper[4787]: I0126 19:17:55.850979 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4","Type":"ContainerStarted","Data":"5aeaffaf31caa04252f2619ace5fdbc36426851cc2e073d2aead7a0f187f9f8d"} Jan 26 19:17:55 crc kubenswrapper[4787]: I0126 19:17:55.853003 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" event={"ID":"93e77987-fd7e-41c8-af53-afbbc13b6f5b","Type":"ContainerStarted","Data":"c2bc914e756812c196e27bc416cd33fd1c4af78ef27483bbc29cb6cdedd34e6b"} Jan 26 19:17:55 crc kubenswrapper[4787]: I0126 19:17:55.853132 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:17:55 crc kubenswrapper[4787]: I0126 19:17:55.883163 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.883142799 podStartE2EDuration="2.883142799s" podCreationTimestamp="2026-01-26 19:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:17:55.873862113 +0000 UTC m=+5644.580998246" watchObservedRunningTime="2026-01-26 19:17:55.883142799 +0000 UTC m=+5644.590278932" Jan 26 19:17:55 crc kubenswrapper[4787]: 
I0126 19:17:55.902850 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" podStartSLOduration=2.902828928 podStartE2EDuration="2.902828928s" podCreationTimestamp="2026-01-26 19:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:17:55.893801518 +0000 UTC m=+5644.600937651" watchObservedRunningTime="2026-01-26 19:17:55.902828928 +0000 UTC m=+5644.609965051" Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.587666 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xfqgs"] Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.590559 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.598563 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xfqgs"] Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.660397 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szg4w\" (UniqueName: \"kubernetes.io/projected/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-kube-api-access-szg4w\") pod \"redhat-operators-xfqgs\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.660523 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-utilities\") pod \"redhat-operators-xfqgs\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.660554 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-catalog-content\") pod \"redhat-operators-xfqgs\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.761982 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szg4w\" (UniqueName: \"kubernetes.io/projected/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-kube-api-access-szg4w\") pod \"redhat-operators-xfqgs\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.762036 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-utilities\") pod \"redhat-operators-xfqgs\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.762083 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-catalog-content\") pod \"redhat-operators-xfqgs\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.762603 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-catalog-content\") pod \"redhat-operators-xfqgs\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.762854 4787 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-utilities\") pod \"redhat-operators-xfqgs\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.784020 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szg4w\" (UniqueName: \"kubernetes.io/projected/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-kube-api-access-szg4w\") pod \"redhat-operators-xfqgs\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:17:58 crc kubenswrapper[4787]: I0126 19:17:58.923663 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:17:59 crc kubenswrapper[4787]: I0126 19:17:59.395290 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xfqgs"] Jan 26 19:17:59 crc kubenswrapper[4787]: I0126 19:17:59.891406 4787 generic.go:334] "Generic (PLEG): container finished" podID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerID="b40065c665ce806da5e7c57fb49eb5d6738f49a4e38ad789414755a98046e7dd" exitCode=0 Jan 26 19:17:59 crc kubenswrapper[4787]: I0126 19:17:59.891527 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfqgs" event={"ID":"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d","Type":"ContainerDied","Data":"b40065c665ce806da5e7c57fb49eb5d6738f49a4e38ad789414755a98046e7dd"} Jan 26 19:17:59 crc kubenswrapper[4787]: I0126 19:17:59.891709 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfqgs" event={"ID":"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d","Type":"ContainerStarted","Data":"40ff67ae7450096595af800588a064eb31d09a5ee1d99ddd92886149643143ec"} Jan 26 19:17:59 crc kubenswrapper[4787]: I0126 19:17:59.894418 4787 provider.go:102] Refreshing cache 
for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:18:00 crc kubenswrapper[4787]: I0126 19:18:00.901755 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfqgs" event={"ID":"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d","Type":"ContainerStarted","Data":"2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2"} Jan 26 19:18:01 crc kubenswrapper[4787]: I0126 19:18:01.913787 4787 generic.go:334] "Generic (PLEG): container finished" podID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerID="2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2" exitCode=0 Jan 26 19:18:01 crc kubenswrapper[4787]: I0126 19:18:01.913847 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfqgs" event={"ID":"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d","Type":"ContainerDied","Data":"2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2"} Jan 26 19:18:02 crc kubenswrapper[4787]: I0126 19:18:02.926499 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfqgs" event={"ID":"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d","Type":"ContainerStarted","Data":"94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8"} Jan 26 19:18:02 crc kubenswrapper[4787]: I0126 19:18:02.950187 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xfqgs" podStartSLOduration=2.284379354 podStartE2EDuration="4.95016244s" podCreationTimestamp="2026-01-26 19:17:58 +0000 UTC" firstStartedPulling="2026-01-26 19:17:59.89418876 +0000 UTC m=+5648.601324893" lastFinishedPulling="2026-01-26 19:18:02.559971836 +0000 UTC m=+5651.267107979" observedRunningTime="2026-01-26 19:18:02.948630833 +0000 UTC m=+5651.655766956" watchObservedRunningTime="2026-01-26 19:18:02.95016244 +0000 UTC m=+5651.657298573" Jan 26 19:18:03 crc kubenswrapper[4787]: I0126 19:18:03.826151 4787 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:18:03 crc kubenswrapper[4787]: I0126 19:18:03.890181 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64f475f7d7-shg8h"] Jan 26 19:18:03 crc kubenswrapper[4787]: I0126 19:18:03.890406 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" podUID="41d535d7-6dcc-42ce-aba4-00343d280c38" containerName="dnsmasq-dns" containerID="cri-o://6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd" gracePeriod=10 Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.394366 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.477777 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24cbd\" (UniqueName: \"kubernetes.io/projected/41d535d7-6dcc-42ce-aba4-00343d280c38-kube-api-access-24cbd\") pod \"41d535d7-6dcc-42ce-aba4-00343d280c38\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.477854 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-config\") pod \"41d535d7-6dcc-42ce-aba4-00343d280c38\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.477886 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-nb\") pod \"41d535d7-6dcc-42ce-aba4-00343d280c38\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.478046 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-dns-svc\") pod \"41d535d7-6dcc-42ce-aba4-00343d280c38\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.478067 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-sb\") pod \"41d535d7-6dcc-42ce-aba4-00343d280c38\" (UID: \"41d535d7-6dcc-42ce-aba4-00343d280c38\") " Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.513628 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d535d7-6dcc-42ce-aba4-00343d280c38-kube-api-access-24cbd" (OuterVolumeSpecName: "kube-api-access-24cbd") pod "41d535d7-6dcc-42ce-aba4-00343d280c38" (UID: "41d535d7-6dcc-42ce-aba4-00343d280c38"). InnerVolumeSpecName "kube-api-access-24cbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.558594 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "41d535d7-6dcc-42ce-aba4-00343d280c38" (UID: "41d535d7-6dcc-42ce-aba4-00343d280c38"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.581524 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41d535d7-6dcc-42ce-aba4-00343d280c38" (UID: "41d535d7-6dcc-42ce-aba4-00343d280c38"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.582418 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24cbd\" (UniqueName: \"kubernetes.io/projected/41d535d7-6dcc-42ce-aba4-00343d280c38-kube-api-access-24cbd\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.582462 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.582476 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.602502 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-config" (OuterVolumeSpecName: "config") pod "41d535d7-6dcc-42ce-aba4-00343d280c38" (UID: "41d535d7-6dcc-42ce-aba4-00343d280c38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.603653 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "41d535d7-6dcc-42ce-aba4-00343d280c38" (UID: "41d535d7-6dcc-42ce-aba4-00343d280c38"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.685524 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.685556 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41d535d7-6dcc-42ce-aba4-00343d280c38-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.965852 4787 generic.go:334] "Generic (PLEG): container finished" podID="41d535d7-6dcc-42ce-aba4-00343d280c38" containerID="6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd" exitCode=0 Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.965919 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.965980 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" event={"ID":"41d535d7-6dcc-42ce-aba4-00343d280c38","Type":"ContainerDied","Data":"6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd"} Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.968083 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64f475f7d7-shg8h" event={"ID":"41d535d7-6dcc-42ce-aba4-00343d280c38","Type":"ContainerDied","Data":"8b098789455c96ab75071d2320205932988f057c51cc647dcf7eaa9db6886a58"} Jan 26 19:18:04 crc kubenswrapper[4787]: I0126 19:18:04.968127 4787 scope.go:117] "RemoveContainer" containerID="6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd" Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.001323 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64f475f7d7-shg8h"] Jan 26 
19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.011587 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64f475f7d7-shg8h"] Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.029284 4787 scope.go:117] "RemoveContainer" containerID="fd36dd5042454ed02c1d3d6194998f82329b4670392c52af040f24501a3be601" Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.061146 4787 scope.go:117] "RemoveContainer" containerID="6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd" Jan 26 19:18:05 crc kubenswrapper[4787]: E0126 19:18:05.061691 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd\": container with ID starting with 6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd not found: ID does not exist" containerID="6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd" Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.061747 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd"} err="failed to get container status \"6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd\": rpc error: code = NotFound desc = could not find container \"6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd\": container with ID starting with 6a01ad81f32dd11f90fbd67712abe6c0053a173ae6ba3ac2c3849cf55297badd not found: ID does not exist" Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.061771 4787 scope.go:117] "RemoveContainer" containerID="fd36dd5042454ed02c1d3d6194998f82329b4670392c52af040f24501a3be601" Jan 26 19:18:05 crc kubenswrapper[4787]: E0126 19:18:05.065467 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fd36dd5042454ed02c1d3d6194998f82329b4670392c52af040f24501a3be601\": container with ID starting with fd36dd5042454ed02c1d3d6194998f82329b4670392c52af040f24501a3be601 not found: ID does not exist" containerID="fd36dd5042454ed02c1d3d6194998f82329b4670392c52af040f24501a3be601" Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.065520 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd36dd5042454ed02c1d3d6194998f82329b4670392c52af040f24501a3be601"} err="failed to get container status \"fd36dd5042454ed02c1d3d6194998f82329b4670392c52af040f24501a3be601\": rpc error: code = NotFound desc = could not find container \"fd36dd5042454ed02c1d3d6194998f82329b4670392c52af040f24501a3be601\": container with ID starting with fd36dd5042454ed02c1d3d6194998f82329b4670392c52af040f24501a3be601 not found: ID does not exist" Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.600608 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d535d7-6dcc-42ce-aba4-00343d280c38" path="/var/lib/kubelet/pods/41d535d7-6dcc-42ce-aba4-00343d280c38/volumes" Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.745722 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.746130 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c9c393dba8c1465ce0724b762adc35d624da435f9c0a1951c5b0a8290424cfe6" gracePeriod=30 Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.760295 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.760836 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-log" containerID="cri-o://05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0" gracePeriod=30 Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.761299 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-metadata" containerID="cri-o://5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399" gracePeriod=30 Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.770052 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.770544 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e67c45a4-ee2a-4de5-bbbd-214282bc8074" containerName="nova-scheduler-scheduler" containerID="cri-o://1c913db43e9dd50a4bb9ab640f9342460452ce6c9bc6e3709e5627e0f011ad9f" gracePeriod=30 Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.782227 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.782483 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="20760e60-b469-4023-aacf-0fa14629a665" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6bb42ebd049344913b8cd887408581aa24cf85099614f8e839f321047fa210e0" gracePeriod=30 Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.823081 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.823397 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerName="nova-api-log" 
containerID="cri-o://95a740208506e3f242e34e12256e4d08213c6e72ebd02840d79f7499064972d9" gracePeriod=30 Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.823542 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerName="nova-api-api" containerID="cri-o://65f0fed7bce90085493a3b6dfc65c9510ac71076c0e5fd4b2e999fb6ef4d1bbe" gracePeriod=30 Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.845476 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.846918 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="e43bc4e3-4624-407d-9d1f-7092e9bd70fe" containerName="nova-cell1-conductor-conductor" containerID="cri-o://d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755" gracePeriod=30 Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.988490 4787 generic.go:334] "Generic (PLEG): container finished" podID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerID="95a740208506e3f242e34e12256e4d08213c6e72ebd02840d79f7499064972d9" exitCode=143 Jan 26 19:18:05 crc kubenswrapper[4787]: I0126 19:18:05.988564 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"110a4e2b-0431-43a0-a95d-7038e00e787a","Type":"ContainerDied","Data":"95a740208506e3f242e34e12256e4d08213c6e72ebd02840d79f7499064972d9"} Jan 26 19:18:06 crc kubenswrapper[4787]: I0126 19:18:06.011390 4787 generic.go:334] "Generic (PLEG): container finished" podID="d2ab3b41-2361-43af-9b1d-821e95213021" containerID="05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0" exitCode=143 Jan 26 19:18:06 crc kubenswrapper[4787]: I0126 19:18:06.011479 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d2ab3b41-2361-43af-9b1d-821e95213021","Type":"ContainerDied","Data":"05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0"} Jan 26 19:18:06 crc kubenswrapper[4787]: I0126 19:18:06.198746 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.026679 4787 generic.go:334] "Generic (PLEG): container finished" podID="20760e60-b469-4023-aacf-0fa14629a665" containerID="6bb42ebd049344913b8cd887408581aa24cf85099614f8e839f321047fa210e0" exitCode=0 Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.026732 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"20760e60-b469-4023-aacf-0fa14629a665","Type":"ContainerDied","Data":"6bb42ebd049344913b8cd887408581aa24cf85099614f8e839f321047fa210e0"} Jan 26 19:18:07 crc kubenswrapper[4787]: E0126 19:18:07.339290 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 19:18:07 crc kubenswrapper[4787]: E0126 19:18:07.340883 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.350165 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:07 crc kubenswrapper[4787]: E0126 19:18:07.351577 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 19:18:07 crc kubenswrapper[4787]: E0126 19:18:07.351624 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="e43bc4e3-4624-407d-9d1f-7092e9bd70fe" containerName="nova-cell1-conductor-conductor" Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.444723 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-config-data\") pod \"20760e60-b469-4023-aacf-0fa14629a665\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.444774 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-combined-ca-bundle\") pod \"20760e60-b469-4023-aacf-0fa14629a665\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.444968 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsjhv\" (UniqueName: \"kubernetes.io/projected/20760e60-b469-4023-aacf-0fa14629a665-kube-api-access-dsjhv\") pod \"20760e60-b469-4023-aacf-0fa14629a665\" (UID: \"20760e60-b469-4023-aacf-0fa14629a665\") " Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.454676 4787 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20760e60-b469-4023-aacf-0fa14629a665-kube-api-access-dsjhv" (OuterVolumeSpecName: "kube-api-access-dsjhv") pod "20760e60-b469-4023-aacf-0fa14629a665" (UID: "20760e60-b469-4023-aacf-0fa14629a665"). InnerVolumeSpecName "kube-api-access-dsjhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.488336 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20760e60-b469-4023-aacf-0fa14629a665" (UID: "20760e60-b469-4023-aacf-0fa14629a665"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.494390 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-config-data" (OuterVolumeSpecName: "config-data") pod "20760e60-b469-4023-aacf-0fa14629a665" (UID: "20760e60-b469-4023-aacf-0fa14629a665"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.547051 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.547083 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20760e60-b469-4023-aacf-0fa14629a665-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.547102 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsjhv\" (UniqueName: \"kubernetes.io/projected/20760e60-b469-4023-aacf-0fa14629a665-kube-api-access-dsjhv\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:07 crc kubenswrapper[4787]: I0126 19:18:07.889043 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.035226 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"20760e60-b469-4023-aacf-0fa14629a665","Type":"ContainerDied","Data":"a54ec87e4c604cee19f1790ea2a4d0ae04494bd3c8474540eb8312b46ab05049"} Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.035278 4787 scope.go:117] "RemoveContainer" containerID="6bb42ebd049344913b8cd887408581aa24cf85099614f8e839f321047fa210e0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.035300 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.036747 4787 generic.go:334] "Generic (PLEG): container finished" podID="e43bc4e3-4624-407d-9d1f-7092e9bd70fe" containerID="d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755" exitCode=0 Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.036776 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e43bc4e3-4624-407d-9d1f-7092e9bd70fe","Type":"ContainerDied","Data":"d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755"} Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.036794 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e43bc4e3-4624-407d-9d1f-7092e9bd70fe","Type":"ContainerDied","Data":"e87b695bd5ddecee04c016f22c72e26d87790577934c03efbd13a146102d06fe"} Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.036848 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.053789 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-combined-ca-bundle\") pod \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.054155 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-config-data\") pod \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.054299 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2r7w\" (UniqueName: \"kubernetes.io/projected/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-kube-api-access-c2r7w\") pod \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\" (UID: \"e43bc4e3-4624-407d-9d1f-7092e9bd70fe\") " Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.059724 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-kube-api-access-c2r7w" (OuterVolumeSpecName: "kube-api-access-c2r7w") pod "e43bc4e3-4624-407d-9d1f-7092e9bd70fe" (UID: "e43bc4e3-4624-407d-9d1f-7092e9bd70fe"). InnerVolumeSpecName "kube-api-access-c2r7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.064176 4787 scope.go:117] "RemoveContainer" containerID="d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.081399 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e43bc4e3-4624-407d-9d1f-7092e9bd70fe" (UID: "e43bc4e3-4624-407d-9d1f-7092e9bd70fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.093243 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:18:08 crc kubenswrapper[4787]: E0126 19:18:08.102382 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9c393dba8c1465ce0724b762adc35d624da435f9c0a1951c5b0a8290424cfe6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 19:18:08 crc kubenswrapper[4787]: E0126 19:18:08.104975 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9c393dba8c1465ce0724b762adc35d624da435f9c0a1951c5b0a8290424cfe6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 19:18:08 crc kubenswrapper[4787]: E0126 19:18:08.117534 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9c393dba8c1465ce0724b762adc35d624da435f9c0a1951c5b0a8290424cfe6" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 19:18:08 crc kubenswrapper[4787]: E0126 19:18:08.117612 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd" containerName="nova-cell0-conductor-conductor" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.122333 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-config-data" (OuterVolumeSpecName: "config-data") pod "e43bc4e3-4624-407d-9d1f-7092e9bd70fe" (UID: "e43bc4e3-4624-407d-9d1f-7092e9bd70fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.126811 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.142020 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:18:08 crc kubenswrapper[4787]: E0126 19:18:08.142481 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20760e60-b469-4023-aacf-0fa14629a665" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.142503 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="20760e60-b469-4023-aacf-0fa14629a665" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 19:18:08 crc kubenswrapper[4787]: E0126 19:18:08.142523 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d535d7-6dcc-42ce-aba4-00343d280c38" containerName="dnsmasq-dns" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.142529 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d535d7-6dcc-42ce-aba4-00343d280c38" 
containerName="dnsmasq-dns" Jan 26 19:18:08 crc kubenswrapper[4787]: E0126 19:18:08.142542 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d535d7-6dcc-42ce-aba4-00343d280c38" containerName="init" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.142549 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d535d7-6dcc-42ce-aba4-00343d280c38" containerName="init" Jan 26 19:18:08 crc kubenswrapper[4787]: E0126 19:18:08.142562 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43bc4e3-4624-407d-9d1f-7092e9bd70fe" containerName="nova-cell1-conductor-conductor" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.142568 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43bc4e3-4624-407d-9d1f-7092e9bd70fe" containerName="nova-cell1-conductor-conductor" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.142741 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e43bc4e3-4624-407d-9d1f-7092e9bd70fe" containerName="nova-cell1-conductor-conductor" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.142766 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d535d7-6dcc-42ce-aba4-00343d280c38" containerName="dnsmasq-dns" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.142785 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="20760e60-b469-4023-aacf-0fa14629a665" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.143435 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.147382 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.148969 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.150634 4787 scope.go:117] "RemoveContainer" containerID="d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755" Jan 26 19:18:08 crc kubenswrapper[4787]: E0126 19:18:08.151245 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755\": container with ID starting with d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755 not found: ID does not exist" containerID="d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.151332 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755"} err="failed to get container status \"d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755\": rpc error: code = NotFound desc = could not find container \"d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755\": container with ID starting with d3bb48187a9cb3c58d6dd507c623bde98aa51505f64a3efd167312b8031a3755 not found: ID does not exist" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.156092 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.156117 4787 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.156127 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2r7w\" (UniqueName: \"kubernetes.io/projected/e43bc4e3-4624-407d-9d1f-7092e9bd70fe-kube-api-access-c2r7w\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.262488 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2834a5ab-22aa-4af2-b7b1-67a35657f0a8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2834a5ab-22aa-4af2-b7b1-67a35657f0a8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.262594 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2834a5ab-22aa-4af2-b7b1-67a35657f0a8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2834a5ab-22aa-4af2-b7b1-67a35657f0a8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.262636 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvp2c\" (UniqueName: \"kubernetes.io/projected/2834a5ab-22aa-4af2-b7b1-67a35657f0a8-kube-api-access-xvp2c\") pod \"nova-cell1-novncproxy-0\" (UID: \"2834a5ab-22aa-4af2-b7b1-67a35657f0a8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.364838 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2834a5ab-22aa-4af2-b7b1-67a35657f0a8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"2834a5ab-22aa-4af2-b7b1-67a35657f0a8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.364909 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2834a5ab-22aa-4af2-b7b1-67a35657f0a8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2834a5ab-22aa-4af2-b7b1-67a35657f0a8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.364940 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvp2c\" (UniqueName: \"kubernetes.io/projected/2834a5ab-22aa-4af2-b7b1-67a35657f0a8-kube-api-access-xvp2c\") pod \"nova-cell1-novncproxy-0\" (UID: \"2834a5ab-22aa-4af2-b7b1-67a35657f0a8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.368125 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2834a5ab-22aa-4af2-b7b1-67a35657f0a8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2834a5ab-22aa-4af2-b7b1-67a35657f0a8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.372341 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2834a5ab-22aa-4af2-b7b1-67a35657f0a8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2834a5ab-22aa-4af2-b7b1-67a35657f0a8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.380660 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.387273 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvp2c\" (UniqueName: 
\"kubernetes.io/projected/2834a5ab-22aa-4af2-b7b1-67a35657f0a8-kube-api-access-xvp2c\") pod \"nova-cell1-novncproxy-0\" (UID: \"2834a5ab-22aa-4af2-b7b1-67a35657f0a8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.390674 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.399269 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.400607 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.404868 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.410776 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.469038 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.568898 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.569372 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62kpr\" (UniqueName: \"kubernetes.io/projected/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-kube-api-access-62kpr\") pod \"nova-cell1-conductor-0\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.569428 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.671443 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.671582 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " 
pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.671632 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62kpr\" (UniqueName: \"kubernetes.io/projected/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-kube-api-access-62kpr\") pod \"nova-cell1-conductor-0\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.680012 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.685154 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.693906 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62kpr\" (UniqueName: \"kubernetes.io/projected/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-kube-api-access-62kpr\") pod \"nova-cell1-conductor-0\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.727568 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.909890 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.924093 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.924515 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.992883 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.68:8774/\": read tcp 10.217.0.2:42330->10.217.1.68:8774: read: connection reset by peer" Jan 26 19:18:08 crc kubenswrapper[4787]: I0126 19:18:08.993012 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.68:8774/\": read tcp 10.217.0.2:42314->10.217.1.68:8774: read: connection reset by peer" Jan 26 19:18:09 crc kubenswrapper[4787]: I0126 19:18:09.049203 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2834a5ab-22aa-4af2-b7b1-67a35657f0a8","Type":"ContainerStarted","Data":"74049c2074d91273ab6fa951f420fdc80c1b88b258ce879eda9374cfe0645d75"} Jan 26 19:18:09 crc kubenswrapper[4787]: I0126 19:18:09.149936 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:18:09 crc kubenswrapper[4787]: W0126 19:18:09.155700 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f27b8fd_ca02_4e6a_8bd6_601ef4cc09e1.slice/crio-fb0a4ba19848ec78cba06451d5ab5bc326417ebccc9cb2c696fa5ee4b7d6138b WatchSource:0}: Error finding container fb0a4ba19848ec78cba06451d5ab5bc326417ebccc9cb2c696fa5ee4b7d6138b: Status 404 returned error can't find the container with id fb0a4ba19848ec78cba06451d5ab5bc326417ebccc9cb2c696fa5ee4b7d6138b Jan 26 19:18:09 crc kubenswrapper[4787]: I0126 19:18:09.170577 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.67:8775/\": read tcp 10.217.0.2:51660->10.217.1.67:8775: read: connection reset by peer" Jan 26 19:18:09 crc kubenswrapper[4787]: I0126 19:18:09.170905 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.67:8775/\": read tcp 10.217.0.2:51676->10.217.1.67:8775: read: connection reset by peer" Jan 26 19:18:09 crc kubenswrapper[4787]: I0126 19:18:09.600648 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20760e60-b469-4023-aacf-0fa14629a665" path="/var/lib/kubelet/pods/20760e60-b469-4023-aacf-0fa14629a665/volumes" Jan 26 19:18:09 crc kubenswrapper[4787]: I0126 19:18:09.601800 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43bc4e3-4624-407d-9d1f-7092e9bd70fe" path="/var/lib/kubelet/pods/e43bc4e3-4624-407d-9d1f-7092e9bd70fe/volumes" Jan 26 19:18:09 crc kubenswrapper[4787]: I0126 19:18:09.975446 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xfqgs" podUID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerName="registry-server" probeResult="failure" output=< Jan 26 19:18:09 crc kubenswrapper[4787]: timeout: failed 
to connect service ":50051" within 1s Jan 26 19:18:09 crc kubenswrapper[4787]: > Jan 26 19:18:10 crc kubenswrapper[4787]: I0126 19:18:10.061940 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1","Type":"ContainerStarted","Data":"fb0a4ba19848ec78cba06451d5ab5bc326417ebccc9cb2c696fa5ee4b7d6138b"} Jan 26 19:18:10 crc kubenswrapper[4787]: E0126 19:18:10.705741 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c913db43e9dd50a4bb9ab640f9342460452ce6c9bc6e3709e5627e0f011ad9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 19:18:10 crc kubenswrapper[4787]: E0126 19:18:10.707724 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c913db43e9dd50a4bb9ab640f9342460452ce6c9bc6e3709e5627e0f011ad9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 19:18:10 crc kubenswrapper[4787]: E0126 19:18:10.712478 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c913db43e9dd50a4bb9ab640f9342460452ce6c9bc6e3709e5627e0f011ad9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 19:18:10 crc kubenswrapper[4787]: E0126 19:18:10.712542 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e67c45a4-ee2a-4de5-bbbd-214282bc8074" containerName="nova-scheduler-scheduler" Jan 26 19:18:10 crc kubenswrapper[4787]: I0126 
19:18:10.922477 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.027552 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-config-data\") pod \"d2ab3b41-2361-43af-9b1d-821e95213021\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.028024 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-combined-ca-bundle\") pod \"d2ab3b41-2361-43af-9b1d-821e95213021\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.028178 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ab3b41-2361-43af-9b1d-821e95213021-logs\") pod \"d2ab3b41-2361-43af-9b1d-821e95213021\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.028281 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czzp8\" (UniqueName: \"kubernetes.io/projected/d2ab3b41-2361-43af-9b1d-821e95213021-kube-api-access-czzp8\") pod \"d2ab3b41-2361-43af-9b1d-821e95213021\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") " Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.029678 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2ab3b41-2361-43af-9b1d-821e95213021-logs" (OuterVolumeSpecName: "logs") pod "d2ab3b41-2361-43af-9b1d-821e95213021" (UID: "d2ab3b41-2361-43af-9b1d-821e95213021"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.036873 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ab3b41-2361-43af-9b1d-821e95213021-kube-api-access-czzp8" (OuterVolumeSpecName: "kube-api-access-czzp8") pod "d2ab3b41-2361-43af-9b1d-821e95213021" (UID: "d2ab3b41-2361-43af-9b1d-821e95213021"). InnerVolumeSpecName "kube-api-access-czzp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:18:11 crc kubenswrapper[4787]: E0126 19:18:11.058252 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-config-data podName:d2ab3b41-2361-43af-9b1d-821e95213021 nodeName:}" failed. No retries permitted until 2026-01-26 19:18:11.558222907 +0000 UTC m=+5660.265359040 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-config-data") pod "d2ab3b41-2361-43af-9b1d-821e95213021" (UID: "d2ab3b41-2361-43af-9b1d-821e95213021") : error deleting /var/lib/kubelet/pods/d2ab3b41-2361-43af-9b1d-821e95213021/volume-subpaths: remove /var/lib/kubelet/pods/d2ab3b41-2361-43af-9b1d-821e95213021/volume-subpaths: no such file or directory
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.061074 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2ab3b41-2361-43af-9b1d-821e95213021" (UID: "d2ab3b41-2361-43af-9b1d-821e95213021"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.077758 4787 generic.go:334] "Generic (PLEG): container finished" podID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerID="65f0fed7bce90085493a3b6dfc65c9510ac71076c0e5fd4b2e999fb6ef4d1bbe" exitCode=0
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.077842 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"110a4e2b-0431-43a0-a95d-7038e00e787a","Type":"ContainerDied","Data":"65f0fed7bce90085493a3b6dfc65c9510ac71076c0e5fd4b2e999fb6ef4d1bbe"}
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.087447 4787 generic.go:334] "Generic (PLEG): container finished" podID="d2ab3b41-2361-43af-9b1d-821e95213021" containerID="5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399" exitCode=0
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.087497 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d2ab3b41-2361-43af-9b1d-821e95213021","Type":"ContainerDied","Data":"5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399"}
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.087550 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d2ab3b41-2361-43af-9b1d-821e95213021","Type":"ContainerDied","Data":"f01a620a08f2bb68b811b3bf564aac24f959d7d51882de09a83426a9aa31f4b4"}
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.087572 4787 scope.go:117] "RemoveContainer" containerID="5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.087511 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.089938 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1","Type":"ContainerStarted","Data":"54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d"}
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.090146 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.099287 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2834a5ab-22aa-4af2-b7b1-67a35657f0a8","Type":"ContainerStarted","Data":"edac97a4f5b727cc8bd4c71b508eae73204400e7c5daf9174795ade9ba72e5a0"}
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.101657 4787 generic.go:334] "Generic (PLEG): container finished" podID="7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd" containerID="c9c393dba8c1465ce0724b762adc35d624da435f9c0a1951c5b0a8290424cfe6" exitCode=0
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.101690 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd","Type":"ContainerDied","Data":"c9c393dba8c1465ce0724b762adc35d624da435f9c0a1951c5b0a8290424cfe6"}
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.117068 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.117048086 podStartE2EDuration="3.117048086s" podCreationTimestamp="2026-01-26 19:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:18:11.109627106 +0000 UTC m=+5659.816763239" watchObservedRunningTime="2026-01-26 19:18:11.117048086 +0000 UTC m=+5659.824184219"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.130395 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.130439 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ab3b41-2361-43af-9b1d-821e95213021-logs\") on node \"crc\" DevicePath \"\""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.130454 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czzp8\" (UniqueName: \"kubernetes.io/projected/d2ab3b41-2361-43af-9b1d-821e95213021-kube-api-access-czzp8\") on node \"crc\" DevicePath \"\""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.139086 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.139064802 podStartE2EDuration="3.139064802s" podCreationTimestamp="2026-01-26 19:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:18:11.134804078 +0000 UTC m=+5659.841940211" watchObservedRunningTime="2026-01-26 19:18:11.139064802 +0000 UTC m=+5659.846200935"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.166327 4787 scope.go:117] "RemoveContainer" containerID="05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.204825 4787 scope.go:117] "RemoveContainer" containerID="5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399"
Jan 26 19:18:11 crc kubenswrapper[4787]: E0126 19:18:11.205303 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399\": container with ID starting with 5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399 not found: ID does not exist" containerID="5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.205342 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399"} err="failed to get container status \"5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399\": rpc error: code = NotFound desc = could not find container \"5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399\": container with ID starting with 5f22c4d3f3d987c9f42648bb2771ce43d7a26fe17d3b056a223e39563cda1399 not found: ID does not exist"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.205372 4787 scope.go:117] "RemoveContainer" containerID="05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0"
Jan 26 19:18:11 crc kubenswrapper[4787]: E0126 19:18:11.205580 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0\": container with ID starting with 05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0 not found: ID does not exist" containerID="05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.205602 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0"} err="failed to get container status \"05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0\": rpc error: code = NotFound desc = could not find container \"05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0\": container with ID starting with 05fc5031cd72c045335616ecb1fccf43546d145ff2119bb1da744fc5623c6dd0 not found: ID does not exist"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.350816 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.358737 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.436757 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-config-data\") pod \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") "
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.436830 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/110a4e2b-0431-43a0-a95d-7038e00e787a-logs\") pod \"110a4e2b-0431-43a0-a95d-7038e00e787a\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") "
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.436868 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-combined-ca-bundle\") pod \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") "
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.436926 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-combined-ca-bundle\") pod \"110a4e2b-0431-43a0-a95d-7038e00e787a\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") "
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.437067 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-config-data\") pod \"110a4e2b-0431-43a0-a95d-7038e00e787a\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") "
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.437104 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkkvg\" (UniqueName: \"kubernetes.io/projected/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-kube-api-access-qkkvg\") pod \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\" (UID: \"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd\") "
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.437135 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx8j8\" (UniqueName: \"kubernetes.io/projected/110a4e2b-0431-43a0-a95d-7038e00e787a-kube-api-access-mx8j8\") pod \"110a4e2b-0431-43a0-a95d-7038e00e787a\" (UID: \"110a4e2b-0431-43a0-a95d-7038e00e787a\") "
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.438233 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/110a4e2b-0431-43a0-a95d-7038e00e787a-logs" (OuterVolumeSpecName: "logs") pod "110a4e2b-0431-43a0-a95d-7038e00e787a" (UID: "110a4e2b-0431-43a0-a95d-7038e00e787a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.439594 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/110a4e2b-0431-43a0-a95d-7038e00e787a-logs\") on node \"crc\" DevicePath \"\""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.449697 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-kube-api-access-qkkvg" (OuterVolumeSpecName: "kube-api-access-qkkvg") pod "7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd" (UID: "7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd"). InnerVolumeSpecName "kube-api-access-qkkvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.468294 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110a4e2b-0431-43a0-a95d-7038e00e787a-kube-api-access-mx8j8" (OuterVolumeSpecName: "kube-api-access-mx8j8") pod "110a4e2b-0431-43a0-a95d-7038e00e787a" (UID: "110a4e2b-0431-43a0-a95d-7038e00e787a"). InnerVolumeSpecName "kube-api-access-mx8j8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.473138 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd" (UID: "7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.494385 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "110a4e2b-0431-43a0-a95d-7038e00e787a" (UID: "110a4e2b-0431-43a0-a95d-7038e00e787a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.494614 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-config-data" (OuterVolumeSpecName: "config-data") pod "110a4e2b-0431-43a0-a95d-7038e00e787a" (UID: "110a4e2b-0431-43a0-a95d-7038e00e787a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.501699 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-config-data" (OuterVolumeSpecName: "config-data") pod "7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd" (UID: "7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.544694 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.544731 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.544743 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.544752 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/110a4e2b-0431-43a0-a95d-7038e00e787a-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.544760 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkkvg\" (UniqueName: \"kubernetes.io/projected/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd-kube-api-access-qkkvg\") on node \"crc\" DevicePath \"\""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.544770 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx8j8\" (UniqueName: \"kubernetes.io/projected/110a4e2b-0431-43a0-a95d-7038e00e787a-kube-api-access-mx8j8\") on node \"crc\" DevicePath \"\""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.645897 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-config-data\") pod \"d2ab3b41-2361-43af-9b1d-821e95213021\" (UID: \"d2ab3b41-2361-43af-9b1d-821e95213021\") "
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.649339 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-config-data" (OuterVolumeSpecName: "config-data") pod "d2ab3b41-2361-43af-9b1d-821e95213021" (UID: "d2ab3b41-2361-43af-9b1d-821e95213021"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.729555 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.740469 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.748541 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ab3b41-2361-43af-9b1d-821e95213021-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.750257 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 19:18:11 crc kubenswrapper[4787]: E0126 19:18:11.750847 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-log"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.750865 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-log"
Jan 26 19:18:11 crc kubenswrapper[4787]: E0126 19:18:11.750913 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerName="nova-api-api"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.750921 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerName="nova-api-api"
Jan 26 19:18:11 crc kubenswrapper[4787]: E0126 19:18:11.750935 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-metadata"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.750941 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-metadata"
Jan 26 19:18:11 crc kubenswrapper[4787]: E0126 19:18:11.750990 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerName="nova-api-log"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.750998 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerName="nova-api-log"
Jan 26 19:18:11 crc kubenswrapper[4787]: E0126 19:18:11.751007 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd" containerName="nova-cell0-conductor-conductor"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.751015 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd" containerName="nova-cell0-conductor-conductor"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.751210 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-log"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.751235 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" containerName="nova-metadata-metadata"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.751245 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd" containerName="nova-cell0-conductor-conductor"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.751253 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerName="nova-api-log"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.751264 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" containerName="nova-api-api"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.752208 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.757098 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.767118 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.850218 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.850316 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwjv\" (UniqueName: \"kubernetes.io/projected/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-kube-api-access-ggwjv\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.850370 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-config-data\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.850394 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-logs\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.951753 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.951882 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggwjv\" (UniqueName: \"kubernetes.io/projected/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-kube-api-access-ggwjv\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.951929 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-config-data\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.951965 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-logs\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.952426 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-logs\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.961702 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.961747 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-config-data\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:11 crc kubenswrapper[4787]: I0126 19:18:11.973819 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggwjv\" (UniqueName: \"kubernetes.io/projected/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-kube-api-access-ggwjv\") pod \"nova-metadata-0\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " pod="openstack/nova-metadata-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.071867 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.115286 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"110a4e2b-0431-43a0-a95d-7038e00e787a","Type":"ContainerDied","Data":"5957912fc3d6d0cdf38a35efeed754a63efcf5114a48f29eeed5e090de6e6437"}
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.115335 4787 scope.go:117] "RemoveContainer" containerID="65f0fed7bce90085493a3b6dfc65c9510ac71076c0e5fd4b2e999fb6ef4d1bbe"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.115466 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.142113 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.142489 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd","Type":"ContainerDied","Data":"e9697129d75e6a7f0243098146a754fea808030efe2f38f3a869fc6bb982910a"}
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.191211 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.202498 4787 scope.go:117] "RemoveContainer" containerID="95a740208506e3f242e34e12256e4d08213c6e72ebd02840d79f7499064972d9"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.206430 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.231035 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.241960 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.254321 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.255664 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.259582 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.288519 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.316312 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.318107 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.321682 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.322009 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.340175 4787 scope.go:117] "RemoveContainer" containerID="c9c393dba8c1465ce0724b762adc35d624da435f9c0a1951c5b0a8290424cfe6"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.359315 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.359359 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.360471 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhlmw\" (UniqueName: \"kubernetes.io/projected/e2031eb5-98c0-4b04-baa1-3e7392198341-kube-api-access-fhlmw\") pod \"nova-cell0-conductor-0\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.463601 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.463661 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.463689 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-config-data\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.463710 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba7dad7-8960-487f-8626-a73b43620632-logs\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.463742 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhlmw\" (UniqueName: \"kubernetes.io/projected/e2031eb5-98c0-4b04-baa1-3e7392198341-kube-api-access-fhlmw\") pod \"nova-cell0-conductor-0\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.463792 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv9n4\" (UniqueName: \"kubernetes.io/projected/6ba7dad7-8960-487f-8626-a73b43620632-kube-api-access-sv9n4\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.463889 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.473494 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.476817 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.487700 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhlmw\" (UniqueName: \"kubernetes.io/projected/e2031eb5-98c0-4b04-baa1-3e7392198341-kube-api-access-fhlmw\") pod \"nova-cell0-conductor-0\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.567342 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.567398 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-config-data\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.567421 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba7dad7-8960-487f-8626-a73b43620632-logs\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.567506 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv9n4\" (UniqueName: \"kubernetes.io/projected/6ba7dad7-8960-487f-8626-a73b43620632-kube-api-access-sv9n4\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.570555 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba7dad7-8960-487f-8626-a73b43620632-logs\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.573425 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-config-data\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.573563 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.588135 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv9n4\" (UniqueName: \"kubernetes.io/projected/6ba7dad7-8960-487f-8626-a73b43620632-kube-api-access-sv9n4\") pod \"nova-api-0\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " pod="openstack/nova-api-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.617040 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 26 19:18:12 crc kubenswrapper[4787]: I0126 19:18:12.698139 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:18:13 crc kubenswrapper[4787]: I0126 19:18:13.104025 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 19:18:13 crc kubenswrapper[4787]: I0126 19:18:13.158291 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e2031eb5-98c0-4b04-baa1-3e7392198341","Type":"ContainerStarted","Data":"d0e1e76439bfb81929a1adc9cd2d7ec65a3d2bfc46bb88872bb53dcba02c0610"}
Jan 26 19:18:13 crc kubenswrapper[4787]: I0126 19:18:13.194057 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:18:13 crc kubenswrapper[4787]: I0126 19:18:13.470043 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 19:18:13 crc kubenswrapper[4787]: I0126 19:18:13.605122 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="110a4e2b-0431-43a0-a95d-7038e00e787a" path="/var/lib/kubelet/pods/110a4e2b-0431-43a0-a95d-7038e00e787a/volumes"
Jan 26 19:18:13 crc kubenswrapper[4787]: I0126 19:18:13.605855 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd" path="/var/lib/kubelet/pods/7db5d19e-0efc-49f6-b2b2-76a0a61e0bdd/volumes"
Jan 26 19:18:13 crc kubenswrapper[4787]: I0126 19:18:13.606846 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ab3b41-2361-43af-9b1d-821e95213021" path="/var/lib/kubelet/pods/d2ab3b41-2361-43af-9b1d-821e95213021/volumes"
Jan 26 19:18:15 crc kubenswrapper[4787]: E0126 19:18:15.707354 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c913db43e9dd50a4bb9ab640f9342460452ce6c9bc6e3709e5627e0f011ad9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 26 19:18:15 crc 
kubenswrapper[4787]: E0126 19:18:15.709470 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c913db43e9dd50a4bb9ab640f9342460452ce6c9bc6e3709e5627e0f011ad9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 19:18:15 crc kubenswrapper[4787]: E0126 19:18:15.710993 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c913db43e9dd50a4bb9ab640f9342460452ce6c9bc6e3709e5627e0f011ad9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 19:18:15 crc kubenswrapper[4787]: E0126 19:18:15.711047 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e67c45a4-ee2a-4de5-bbbd-214282bc8074" containerName="nova-scheduler-scheduler" Jan 26 19:18:16 crc kubenswrapper[4787]: W0126 19:18:16.062636 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod621bfbac_f962_44a9_b7dc_0ede7bd2aaa4.slice/crio-b1de05c37e8889d688ca53ac3f0ea906f84931338b38e106588c18cbdd7c0737 WatchSource:0}: Error finding container b1de05c37e8889d688ca53ac3f0ea906f84931338b38e106588c18cbdd7c0737: Status 404 returned error can't find the container with id b1de05c37e8889d688ca53ac3f0ea906f84931338b38e106588c18cbdd7c0737 Jan 26 19:18:16 crc kubenswrapper[4787]: I0126 19:18:16.063423 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:18:16 crc kubenswrapper[4787]: W0126 19:18:16.066699 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ba7dad7_8960_487f_8626_a73b43620632.slice/crio-8280c0832e8de5feef50b2c22c069d52ca7d23ec60a0dd7fdefb0290b08a9591 WatchSource:0}: Error finding container 8280c0832e8de5feef50b2c22c069d52ca7d23ec60a0dd7fdefb0290b08a9591: Status 404 returned error can't find the container with id 8280c0832e8de5feef50b2c22c069d52ca7d23ec60a0dd7fdefb0290b08a9591 Jan 26 19:18:16 crc kubenswrapper[4787]: I0126 19:18:16.182409 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4","Type":"ContainerStarted","Data":"b1de05c37e8889d688ca53ac3f0ea906f84931338b38e106588c18cbdd7c0737"} Jan 26 19:18:16 crc kubenswrapper[4787]: I0126 19:18:16.184276 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ba7dad7-8960-487f-8626-a73b43620632","Type":"ContainerStarted","Data":"8280c0832e8de5feef50b2c22c069d52ca7d23ec60a0dd7fdefb0290b08a9591"} Jan 26 19:18:17 crc kubenswrapper[4787]: I0126 19:18:17.195434 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e2031eb5-98c0-4b04-baa1-3e7392198341","Type":"ContainerStarted","Data":"f4e5df9b9ee303c518fa932f79eceb8e3e30a2d19801376fb2aa4e2f1e1cc007"} Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.204667 4787 generic.go:334] "Generic (PLEG): container finished" podID="e67c45a4-ee2a-4de5-bbbd-214282bc8074" containerID="1c913db43e9dd50a4bb9ab640f9342460452ce6c9bc6e3709e5627e0f011ad9f" exitCode=0 Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.205046 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e67c45a4-ee2a-4de5-bbbd-214282bc8074","Type":"ContainerDied","Data":"1c913db43e9dd50a4bb9ab640f9342460452ce6c9bc6e3709e5627e0f011ad9f"} Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.208449 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"6ba7dad7-8960-487f-8626-a73b43620632","Type":"ContainerStarted","Data":"f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3"} Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.208504 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ba7dad7-8960-487f-8626-a73b43620632","Type":"ContainerStarted","Data":"28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19"} Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.210728 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4","Type":"ContainerStarted","Data":"aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7"} Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.210784 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4","Type":"ContainerStarted","Data":"7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca"} Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.210860 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.229985 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=6.229970605 podStartE2EDuration="6.229970605s" podCreationTimestamp="2026-01-26 19:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:18:18.228625662 +0000 UTC m=+5666.935761795" watchObservedRunningTime="2026-01-26 19:18:18.229970605 +0000 UTC m=+5666.937106738" Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.256509 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" 
podStartSLOduration=6.256489939 podStartE2EDuration="6.256489939s" podCreationTimestamp="2026-01-26 19:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:18:18.248773221 +0000 UTC m=+5666.955909374" watchObservedRunningTime="2026-01-26 19:18:18.256489939 +0000 UTC m=+5666.963626072" Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.469235 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.479469 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.771246 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.914048 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:18:18 crc kubenswrapper[4787]: I0126 19:18:18.975427 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.006587 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xclk7\" (UniqueName: \"kubernetes.io/projected/e67c45a4-ee2a-4de5-bbbd-214282bc8074-kube-api-access-xclk7\") pod \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.006625 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-config-data\") pod \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.006729 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-combined-ca-bundle\") pod \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\" (UID: \"e67c45a4-ee2a-4de5-bbbd-214282bc8074\") " Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.016746 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67c45a4-ee2a-4de5-bbbd-214282bc8074-kube-api-access-xclk7" (OuterVolumeSpecName: "kube-api-access-xclk7") pod "e67c45a4-ee2a-4de5-bbbd-214282bc8074" (UID: "e67c45a4-ee2a-4de5-bbbd-214282bc8074"). InnerVolumeSpecName "kube-api-access-xclk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.027212 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.032447 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e67c45a4-ee2a-4de5-bbbd-214282bc8074" (UID: "e67c45a4-ee2a-4de5-bbbd-214282bc8074"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.034664 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-config-data" (OuterVolumeSpecName: "config-data") pod "e67c45a4-ee2a-4de5-bbbd-214282bc8074" (UID: "e67c45a4-ee2a-4de5-bbbd-214282bc8074"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.108397 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xclk7\" (UniqueName: \"kubernetes.io/projected/e67c45a4-ee2a-4de5-bbbd-214282bc8074-kube-api-access-xclk7\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.108439 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.108449 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67c45a4-ee2a-4de5-bbbd-214282bc8074-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.211427 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xfqgs"] Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.220386 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e67c45a4-ee2a-4de5-bbbd-214282bc8074","Type":"ContainerDied","Data":"0cf134ec57fb758b22aa908b76107a75814e177087c0d4be58713dbf06a41832"} Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.220726 4787 scope.go:117] "RemoveContainer" containerID="1c913db43e9dd50a4bb9ab640f9342460452ce6c9bc6e3709e5627e0f011ad9f" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.220826 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.237192 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.261637 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=8.26162216 podStartE2EDuration="8.26162216s" podCreationTimestamp="2026-01-26 19:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:18:19.245388675 +0000 UTC m=+5667.952524808" watchObservedRunningTime="2026-01-26 19:18:19.26162216 +0000 UTC m=+5667.968758293" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.300663 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.324576 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.341396 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:18:19 crc kubenswrapper[4787]: E0126 19:18:19.341811 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67c45a4-ee2a-4de5-bbbd-214282bc8074" containerName="nova-scheduler-scheduler" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.341823 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67c45a4-ee2a-4de5-bbbd-214282bc8074" containerName="nova-scheduler-scheduler" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.342000 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67c45a4-ee2a-4de5-bbbd-214282bc8074" containerName="nova-scheduler-scheduler" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.342597 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.345044 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.353930 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.413350 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-config-data\") pod \"nova-scheduler-0\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " pod="openstack/nova-scheduler-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.413514 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " pod="openstack/nova-scheduler-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.413625 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt42d\" (UniqueName: \"kubernetes.io/projected/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-kube-api-access-rt42d\") pod \"nova-scheduler-0\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " pod="openstack/nova-scheduler-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.515001 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " pod="openstack/nova-scheduler-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.515138 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt42d\" (UniqueName: \"kubernetes.io/projected/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-kube-api-access-rt42d\") pod \"nova-scheduler-0\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " pod="openstack/nova-scheduler-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.515163 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-config-data\") pod \"nova-scheduler-0\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " pod="openstack/nova-scheduler-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.519293 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-config-data\") pod \"nova-scheduler-0\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " pod="openstack/nova-scheduler-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.537302 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " pod="openstack/nova-scheduler-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.538537 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt42d\" (UniqueName: \"kubernetes.io/projected/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-kube-api-access-rt42d\") pod \"nova-scheduler-0\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " pod="openstack/nova-scheduler-0" Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.601312 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67c45a4-ee2a-4de5-bbbd-214282bc8074" path="/var/lib/kubelet/pods/e67c45a4-ee2a-4de5-bbbd-214282bc8074/volumes" 
Jan 26 19:18:19 crc kubenswrapper[4787]: I0126 19:18:19.663591 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:18:20 crc kubenswrapper[4787]: I0126 19:18:20.075178 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:18:20 crc kubenswrapper[4787]: W0126 19:18:20.078268 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd70355a6_3daa_42e6_8b3d_7bedf5d20cfe.slice/crio-ba3490a6be720a6512190cdb4b5535877476701c5a0cd19e6bb5377bb79122e1 WatchSource:0}: Error finding container ba3490a6be720a6512190cdb4b5535877476701c5a0cd19e6bb5377bb79122e1: Status 404 returned error can't find the container with id ba3490a6be720a6512190cdb4b5535877476701c5a0cd19e6bb5377bb79122e1 Jan 26 19:18:20 crc kubenswrapper[4787]: I0126 19:18:20.232322 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe","Type":"ContainerStarted","Data":"ba3490a6be720a6512190cdb4b5535877476701c5a0cd19e6bb5377bb79122e1"} Jan 26 19:18:20 crc kubenswrapper[4787]: I0126 19:18:20.232506 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xfqgs" podUID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerName="registry-server" containerID="cri-o://94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8" gracePeriod=2 Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.224809 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.250741 4787 generic.go:334] "Generic (PLEG): container finished" podID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerID="94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8" exitCode=0 Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.250803 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfqgs" event={"ID":"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d","Type":"ContainerDied","Data":"94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8"} Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.250833 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfqgs" event={"ID":"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d","Type":"ContainerDied","Data":"40ff67ae7450096595af800588a064eb31d09a5ee1d99ddd92886149643143ec"} Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.250848 4787 scope.go:117] "RemoveContainer" containerID="94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.250980 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xfqgs" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.254840 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe","Type":"ContainerStarted","Data":"829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65"} Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.277917 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.277896578 podStartE2EDuration="2.277896578s" podCreationTimestamp="2026-01-26 19:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:18:21.272127378 +0000 UTC m=+5669.979263511" watchObservedRunningTime="2026-01-26 19:18:21.277896578 +0000 UTC m=+5669.985032711" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.278309 4787 scope.go:117] "RemoveContainer" containerID="2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.302776 4787 scope.go:117] "RemoveContainer" containerID="b40065c665ce806da5e7c57fb49eb5d6738f49a4e38ad789414755a98046e7dd" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.341889 4787 scope.go:117] "RemoveContainer" containerID="94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8" Jan 26 19:18:21 crc kubenswrapper[4787]: E0126 19:18:21.342471 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8\": container with ID starting with 94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8 not found: ID does not exist" containerID="94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 
19:18:21.342503 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8"} err="failed to get container status \"94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8\": rpc error: code = NotFound desc = could not find container \"94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8\": container with ID starting with 94356d85a4bcc1a42f164bfac3cebbbe71b2d4c89472578832affe5991d0ddb8 not found: ID does not exist" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.342524 4787 scope.go:117] "RemoveContainer" containerID="2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2" Jan 26 19:18:21 crc kubenswrapper[4787]: E0126 19:18:21.342806 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2\": container with ID starting with 2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2 not found: ID does not exist" containerID="2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.342831 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2"} err="failed to get container status \"2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2\": rpc error: code = NotFound desc = could not find container \"2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2\": container with ID starting with 2850a31dec17d031c56987a90d819b7c32c5869f59a6e521fa857f36cd93d9d2 not found: ID does not exist" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.342846 4787 scope.go:117] "RemoveContainer" containerID="b40065c665ce806da5e7c57fb49eb5d6738f49a4e38ad789414755a98046e7dd" Jan 26 19:18:21 crc 
kubenswrapper[4787]: E0126 19:18:21.343162 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40065c665ce806da5e7c57fb49eb5d6738f49a4e38ad789414755a98046e7dd\": container with ID starting with b40065c665ce806da5e7c57fb49eb5d6738f49a4e38ad789414755a98046e7dd not found: ID does not exist" containerID="b40065c665ce806da5e7c57fb49eb5d6738f49a4e38ad789414755a98046e7dd" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.343185 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40065c665ce806da5e7c57fb49eb5d6738f49a4e38ad789414755a98046e7dd"} err="failed to get container status \"b40065c665ce806da5e7c57fb49eb5d6738f49a4e38ad789414755a98046e7dd\": rpc error: code = NotFound desc = could not find container \"b40065c665ce806da5e7c57fb49eb5d6738f49a4e38ad789414755a98046e7dd\": container with ID starting with b40065c665ce806da5e7c57fb49eb5d6738f49a4e38ad789414755a98046e7dd not found: ID does not exist" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.364280 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szg4w\" (UniqueName: \"kubernetes.io/projected/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-kube-api-access-szg4w\") pod \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.364368 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-utilities\") pod \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.364439 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-catalog-content\") pod \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\" (UID: \"aca13ae4-4dd7-4843-9ede-b8366f9f9c7d\") " Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.365360 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-utilities" (OuterVolumeSpecName: "utilities") pod "aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" (UID: "aca13ae4-4dd7-4843-9ede-b8366f9f9c7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.373537 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-kube-api-access-szg4w" (OuterVolumeSpecName: "kube-api-access-szg4w") pod "aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" (UID: "aca13ae4-4dd7-4843-9ede-b8366f9f9c7d"). InnerVolumeSpecName "kube-api-access-szg4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.466210 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.466258 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szg4w\" (UniqueName: \"kubernetes.io/projected/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-kube-api-access-szg4w\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.493964 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" (UID: "aca13ae4-4dd7-4843-9ede-b8366f9f9c7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.567735 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.600756 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xfqgs"] Jan 26 19:18:21 crc kubenswrapper[4787]: I0126 19:18:21.601014 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xfqgs"] Jan 26 19:18:22 crc kubenswrapper[4787]: I0126 19:18:22.073331 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 19:18:22 crc kubenswrapper[4787]: I0126 19:18:22.073980 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 19:18:22 crc kubenswrapper[4787]: I0126 19:18:22.074215 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:18:22 crc kubenswrapper[4787]: I0126 19:18:22.074308 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:18:22 crc kubenswrapper[4787]: I0126 19:18:22.652781 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 19:18:22 crc kubenswrapper[4787]: I0126 19:18:22.699820 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:18:22 crc kubenswrapper[4787]: I0126 19:18:22.699860 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:18:23 crc kubenswrapper[4787]: I0126 19:18:23.155180 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.78:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:18:23 crc kubenswrapper[4787]: I0126 19:18:23.155189 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.78:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:18:23 crc kubenswrapper[4787]: I0126 19:18:23.607641 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" path="/var/lib/kubelet/pods/aca13ae4-4dd7-4843-9ede-b8366f9f9c7d/volumes" Jan 26 19:18:23 crc kubenswrapper[4787]: I0126 19:18:23.783183 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ba7dad7-8960-487f-8626-a73b43620632" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.80:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:18:23 crc kubenswrapper[4787]: I0126 19:18:23.783183 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ba7dad7-8960-487f-8626-a73b43620632" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.80:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:18:24 crc kubenswrapper[4787]: I0126 19:18:24.664720 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 19:18:24 crc kubenswrapper[4787]: I0126 19:18:24.953129 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:18:24 crc kubenswrapper[4787]: E0126 19:18:24.954006 4787 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerName="extract-content" Jan 26 19:18:24 crc kubenswrapper[4787]: I0126 19:18:24.954026 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerName="extract-content" Jan 26 19:18:24 crc kubenswrapper[4787]: E0126 19:18:24.954047 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerName="registry-server" Jan 26 19:18:24 crc kubenswrapper[4787]: I0126 19:18:24.954055 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerName="registry-server" Jan 26 19:18:24 crc kubenswrapper[4787]: E0126 19:18:24.954070 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerName="extract-utilities" Jan 26 19:18:24 crc kubenswrapper[4787]: I0126 19:18:24.954078 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerName="extract-utilities" Jan 26 19:18:24 crc kubenswrapper[4787]: I0126 19:18:24.954282 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca13ae4-4dd7-4843-9ede-b8366f9f9c7d" containerName="registry-server" Jan 26 19:18:24 crc kubenswrapper[4787]: I0126 19:18:24.955500 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:18:24 crc kubenswrapper[4787]: I0126 19:18:24.958585 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 19:18:24 crc kubenswrapper[4787]: I0126 19:18:24.966861 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.028922 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.028991 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-scripts\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.029135 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/927f501f-60d6-42ad-b10c-7c9248a5e73c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.029181 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.029207 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.029231 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ckzt\" (UniqueName: \"kubernetes.io/projected/927f501f-60d6-42ad-b10c-7c9248a5e73c-kube-api-access-8ckzt\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.130603 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.130684 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ckzt\" (UniqueName: \"kubernetes.io/projected/927f501f-60d6-42ad-b10c-7c9248a5e73c-kube-api-access-8ckzt\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.130803 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.130845 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-scripts\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.130913 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/927f501f-60d6-42ad-b10c-7c9248a5e73c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.130991 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.133906 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/927f501f-60d6-42ad-b10c-7c9248a5e73c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.138224 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.138424 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " 
pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.138713 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-scripts\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.138938 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.152635 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ckzt\" (UniqueName: \"kubernetes.io/projected/927f501f-60d6-42ad-b10c-7c9248a5e73c-kube-api-access-8ckzt\") pod \"cinder-scheduler-0\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.288904 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.809071 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.838738 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.839338 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" containerName="cinder-api-log" containerID="cri-o://5aeaffaf31caa04252f2619ace5fdbc36426851cc2e073d2aead7a0f187f9f8d" gracePeriod=30 Jan 26 19:18:25 crc kubenswrapper[4787]: I0126 19:18:25.839410 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" containerName="cinder-api" containerID="cri-o://af15adcf625083d538337572d78e60abf215805e2c9fb5536556e46f1165c681" gracePeriod=30 Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.325654 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"927f501f-60d6-42ad-b10c-7c9248a5e73c","Type":"ContainerStarted","Data":"9fbdd6e73f263c08a37bbdb113819c97e94a21f7e057d542389184f141824f75"} Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.329687 4787 generic.go:334] "Generic (PLEG): container finished" podID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" containerID="5aeaffaf31caa04252f2619ace5fdbc36426851cc2e073d2aead7a0f187f9f8d" exitCode=143 Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.329731 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4","Type":"ContainerDied","Data":"5aeaffaf31caa04252f2619ace5fdbc36426851cc2e073d2aead7a0f187f9f8d"} Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.392163 4787 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.393916 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.396115 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.410195 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.454646 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.454694 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.454720 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.454746 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.454836 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.454853 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.454892 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.454942 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.454973 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.454990 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.455019 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.455035 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3680052c-4070-45ac-8697-4e2050a95201-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.455052 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.455068 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gqql\" (UniqueName: 
\"kubernetes.io/projected/3680052c-4070-45ac-8697-4e2050a95201-kube-api-access-5gqql\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.455088 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-run\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.455104 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558062 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558164 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558190 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-var-locks-cinder\") pod 
\"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558207 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558241 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558261 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3680052c-4070-45ac-8697-4e2050a95201-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558279 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558297 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gqql\" (UniqueName: \"kubernetes.io/projected/3680052c-4070-45ac-8697-4e2050a95201-kube-api-access-5gqql\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc 
kubenswrapper[4787]: I0126 19:18:26.558315 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-run\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558331 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558348 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558366 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558387 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558414 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558460 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558457 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.558478 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.559071 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.559183 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: 
\"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.559916 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.559976 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-run\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.560022 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.560058 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.560092 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.560274 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.560401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3680052c-4070-45ac-8697-4e2050a95201-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.563835 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.566663 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.567651 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.575706 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3680052c-4070-45ac-8697-4e2050a95201-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: 
\"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.580459 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3680052c-4070-45ac-8697-4e2050a95201-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.601671 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gqql\" (UniqueName: \"kubernetes.io/projected/3680052c-4070-45ac-8697-4e2050a95201-kube-api-access-5gqql\") pod \"cinder-volume-volume1-0\" (UID: \"3680052c-4070-45ac-8697-4e2050a95201\") " pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:26 crc kubenswrapper[4787]: I0126 19:18:26.827780 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:27 crc kubenswrapper[4787]: W0126 19:18:27.270546 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3680052c_4070_45ac_8697_4e2050a95201.slice/crio-75c712a38703aebb50d7dba049dcc1205a4bb031a2cf050f10e151f6cde4b84f WatchSource:0}: Error finding container 75c712a38703aebb50d7dba049dcc1205a4bb031a2cf050f10e151f6cde4b84f: Status 404 returned error can't find the container with id 75c712a38703aebb50d7dba049dcc1205a4bb031a2cf050f10e151f6cde4b84f Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.275508 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.339324 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"927f501f-60d6-42ad-b10c-7c9248a5e73c","Type":"ContainerStarted","Data":"ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc"} Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.340143 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3680052c-4070-45ac-8697-4e2050a95201","Type":"ContainerStarted","Data":"75c712a38703aebb50d7dba049dcc1205a4bb031a2cf050f10e151f6cde4b84f"} Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.640330 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.666165 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.669129 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.688097 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.837335 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-lib-modules\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.837394 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.837445 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-etc-nvme\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.837463 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.837673 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.837763 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-scripts\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.837988 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.838043 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.838074 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/69ef81f2-db43-4004-abdc-c34eacc8a2ae-ceph\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.838130 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-dev\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.838160 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-sys\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.838193 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.838254 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-run\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " 
pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.838284 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-config-data-custom\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.838321 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcxn2\" (UniqueName: \"kubernetes.io/projected/69ef81f2-db43-4004-abdc-c34eacc8a2ae-kube-api-access-qcxn2\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.838359 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-config-data\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940078 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-etc-nvme\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940148 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940249 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940273 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-scripts\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940342 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940359 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940396 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/69ef81f2-db43-4004-abdc-c34eacc8a2ae-ceph\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940425 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-dev\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " 
pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940438 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-sys\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940474 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940503 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-run\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940519 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-config-data-custom\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940568 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxn2\" (UniqueName: \"kubernetes.io/projected/69ef81f2-db43-4004-abdc-c34eacc8a2ae-kube-api-access-qcxn2\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940591 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-config-data\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940764 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-lib-modules\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940812 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.940923 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.941660 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-etc-nvme\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.941712 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc 
kubenswrapper[4787]: I0126 19:18:27.941736 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.943077 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.943139 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-dev\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.943164 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-sys\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.943211 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-run\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.943390 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-lib-modules\") pod \"cinder-backup-0\" (UID: 
\"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.943894 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/69ef81f2-db43-4004-abdc-c34eacc8a2ae-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.947924 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-config-data-custom\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.948754 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/69ef81f2-db43-4004-abdc-c34eacc8a2ae-ceph\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.949254 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.950506 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-config-data\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.951198 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/69ef81f2-db43-4004-abdc-c34eacc8a2ae-scripts\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:27 crc kubenswrapper[4787]: I0126 19:18:27.965585 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxn2\" (UniqueName: \"kubernetes.io/projected/69ef81f2-db43-4004-abdc-c34eacc8a2ae-kube-api-access-qcxn2\") pod \"cinder-backup-0\" (UID: \"69ef81f2-db43-4004-abdc-c34eacc8a2ae\") " pod="openstack/cinder-backup-0" Jan 26 19:18:28 crc kubenswrapper[4787]: I0126 19:18:28.000865 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 19:18:28 crc kubenswrapper[4787]: I0126 19:18:28.362094 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"927f501f-60d6-42ad-b10c-7c9248a5e73c","Type":"ContainerStarted","Data":"191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3"} Jan 26 19:18:28 crc kubenswrapper[4787]: I0126 19:18:28.389308 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.389287419 podStartE2EDuration="4.389287419s" podCreationTimestamp="2026-01-26 19:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:18:28.384849562 +0000 UTC m=+5677.091985695" watchObservedRunningTime="2026-01-26 19:18:28.389287419 +0000 UTC m=+5677.096423542" Jan 26 19:18:28 crc kubenswrapper[4787]: I0126 19:18:28.989018 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.74:8776/healthcheck\": read tcp 10.217.0.2:39016->10.217.1.74:8776: read: connection reset by peer" Jan 26 19:18:29 crc kubenswrapper[4787]: 
I0126 19:18:29.187448 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 19:18:29 crc kubenswrapper[4787]: I0126 19:18:29.373892 4787 generic.go:334] "Generic (PLEG): container finished" podID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" containerID="af15adcf625083d538337572d78e60abf215805e2c9fb5536556e46f1165c681" exitCode=0 Jan 26 19:18:29 crc kubenswrapper[4787]: I0126 19:18:29.373971 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4","Type":"ContainerDied","Data":"af15adcf625083d538337572d78e60abf215805e2c9fb5536556e46f1165c681"} Jan 26 19:18:29 crc kubenswrapper[4787]: I0126 19:18:29.375711 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"69ef81f2-db43-4004-abdc-c34eacc8a2ae","Type":"ContainerStarted","Data":"b00af5dc0272d60c5e2f4c6ef26f95e93aa79445f8da2d26fcacc12b210043a3"} Jan 26 19:18:29 crc kubenswrapper[4787]: I0126 19:18:29.377594 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3680052c-4070-45ac-8697-4e2050a95201","Type":"ContainerStarted","Data":"5768d1808e8af6a8bcbb3aedfaa8c6fd8d1a6541f5fce45c18d73696a2b537db"} Jan 26 19:18:29 crc kubenswrapper[4787]: I0126 19:18:29.665821 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 19:18:29 crc kubenswrapper[4787]: I0126 19:18:29.779757 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.163479 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.205844 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-scripts\") pod \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.205902 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-combined-ca-bundle\") pod \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.206019 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data\") pod \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.206048 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data-custom\") pod \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.206123 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqqf4\" (UniqueName: \"kubernetes.io/projected/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-kube-api-access-tqqf4\") pod \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.206203 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-logs\") pod \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.206236 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-etc-machine-id\") pod \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\" (UID: \"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4\") " Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.206782 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" (UID: "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.210208 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-logs" (OuterVolumeSpecName: "logs") pod "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" (UID: "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.218058 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" (UID: "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.219031 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-kube-api-access-tqqf4" (OuterVolumeSpecName: "kube-api-access-tqqf4") pod "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" (UID: "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4"). InnerVolumeSpecName "kube-api-access-tqqf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.231519 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-scripts" (OuterVolumeSpecName: "scripts") pod "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" (UID: "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.280853 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data" (OuterVolumeSpecName: "config-data") pod "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" (UID: "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.290708 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" (UID: "1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.290835 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.309104 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.309245 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.309259 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.309277 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.309288 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqqf4\" (UniqueName: \"kubernetes.io/projected/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-kube-api-access-tqqf4\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.309325 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.309336 4787 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.388890 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4","Type":"ContainerDied","Data":"72c2c2b6b656c5e87f86b7af85dea628e0f83f0b1236539cec34a8af53606fab"} Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.388969 4787 scope.go:117] "RemoveContainer" containerID="af15adcf625083d538337572d78e60abf215805e2c9fb5536556e46f1165c681" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.389123 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.396159 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3680052c-4070-45ac-8697-4e2050a95201","Type":"ContainerStarted","Data":"453bd92618236043b639e9f7b66e0433a1fef45a7251c0ded9f1a3277d22f47e"} Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.427068 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.9040372100000003 podStartE2EDuration="4.427051509s" podCreationTimestamp="2026-01-26 19:18:26 +0000 UTC" firstStartedPulling="2026-01-26 19:18:27.272434413 +0000 UTC m=+5675.979570546" lastFinishedPulling="2026-01-26 19:18:28.795448702 +0000 UTC m=+5677.502584845" observedRunningTime="2026-01-26 19:18:30.424262312 +0000 UTC m=+5679.131398465" watchObservedRunningTime="2026-01-26 19:18:30.427051509 +0000 UTC m=+5679.134187642" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.432980 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.478312 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-api-0"] Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.490007 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.500059 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:18:30 crc kubenswrapper[4787]: E0126 19:18:30.500599 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" containerName="cinder-api" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.500618 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" containerName="cinder-api" Jan 26 19:18:30 crc kubenswrapper[4787]: E0126 19:18:30.500649 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" containerName="cinder-api-log" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.500658 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" containerName="cinder-api-log" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.500851 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" containerName="cinder-api-log" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.500880 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" containerName="cinder-api" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.502085 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.510387 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.513766 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.513869 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-config-data\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.513962 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-scripts\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.513995 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-config-data-custom\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.514018 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwpmr\" (UniqueName: \"kubernetes.io/projected/0dfa3018-6648-4c71-8640-3c888b057c57-kube-api-access-gwpmr\") 
pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.514046 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0dfa3018-6648-4c71-8640-3c888b057c57-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.514059 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dfa3018-6648-4c71-8640-3c888b057c57-logs\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.543189 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.615701 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-scripts\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.615760 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-config-data-custom\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.615781 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwpmr\" (UniqueName: \"kubernetes.io/projected/0dfa3018-6648-4c71-8640-3c888b057c57-kube-api-access-gwpmr\") pod \"cinder-api-0\" (UID: 
\"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.615806 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0dfa3018-6648-4c71-8640-3c888b057c57-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.615820 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dfa3018-6648-4c71-8640-3c888b057c57-logs\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.615992 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0dfa3018-6648-4c71-8640-3c888b057c57-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.616526 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dfa3018-6648-4c71-8640-3c888b057c57-logs\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.617732 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.617782 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-config-data\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.621657 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-config-data-custom\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.624350 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-scripts\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.625384 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.633139 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwpmr\" (UniqueName: \"kubernetes.io/projected/0dfa3018-6648-4c71-8640-3c888b057c57-kube-api-access-gwpmr\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.636430 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfa3018-6648-4c71-8640-3c888b057c57-config-data\") pod \"cinder-api-0\" (UID: \"0dfa3018-6648-4c71-8640-3c888b057c57\") " pod="openstack/cinder-api-0" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.797321 4787 scope.go:117] 
"RemoveContainer" containerID="5aeaffaf31caa04252f2619ace5fdbc36426851cc2e073d2aead7a0f187f9f8d" Jan 26 19:18:30 crc kubenswrapper[4787]: I0126 19:18:30.824757 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:18:31 crc kubenswrapper[4787]: I0126 19:18:31.420146 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"69ef81f2-db43-4004-abdc-c34eacc8a2ae","Type":"ContainerStarted","Data":"f1d975e9e0df4a988ebdb43355f5edb278ea99b2059e81bfbc94281c2d7bb986"} Jan 26 19:18:31 crc kubenswrapper[4787]: I0126 19:18:31.462129 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:18:31 crc kubenswrapper[4787]: I0126 19:18:31.605755 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4" path="/var/lib/kubelet/pods/1c5a6ae7-830e-4eb4-bc70-cb12c45f76e4/volumes" Jan 26 19:18:31 crc kubenswrapper[4787]: I0126 19:18:31.828136 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 19:18:32.086356 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 19:18:32.090264 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 19:18:32.096630 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 19:18:32.457205 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0dfa3018-6648-4c71-8640-3c888b057c57","Type":"ContainerStarted","Data":"721c0d84f7256a7a26c5fb87b74f4e9f8cd0a32902127e065fb179468c913ee5"} Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 
19:18:32.457252 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0dfa3018-6648-4c71-8640-3c888b057c57","Type":"ContainerStarted","Data":"ad566b51a85176dd1466e2e2eff34801fb20aad3107635d58814bbd2932c56d2"} Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 19:18:32.513742 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"69ef81f2-db43-4004-abdc-c34eacc8a2ae","Type":"ContainerStarted","Data":"4266fc12186306a0d2c53e01d7008694daea946c9ced75c1efa35f2c378b2358"} Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 19:18:32.550969 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 19:18:32.572438 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.96396207 podStartE2EDuration="5.572417506s" podCreationTimestamp="2026-01-26 19:18:27 +0000 UTC" firstStartedPulling="2026-01-26 19:18:29.191332674 +0000 UTC m=+5677.898468807" lastFinishedPulling="2026-01-26 19:18:30.7997881 +0000 UTC m=+5679.506924243" observedRunningTime="2026-01-26 19:18:32.565422806 +0000 UTC m=+5681.272558949" watchObservedRunningTime="2026-01-26 19:18:32.572417506 +0000 UTC m=+5681.279553639" Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 19:18:32.706926 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 19:18:32.708446 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 19:18:32.708475 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 19:18:32 crc kubenswrapper[4787]: I0126 19:18:32.727530 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 19:18:33 
crc kubenswrapper[4787]: I0126 19:18:33.002683 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 26 19:18:33 crc kubenswrapper[4787]: I0126 19:18:33.525007 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0dfa3018-6648-4c71-8640-3c888b057c57","Type":"ContainerStarted","Data":"461aff8307bc069856dfa05e8ab3b37e5d9d66cb68ac29080caf395325315328"} Jan 26 19:18:33 crc kubenswrapper[4787]: I0126 19:18:33.525698 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 19:18:33 crc kubenswrapper[4787]: I0126 19:18:33.525742 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 19:18:33 crc kubenswrapper[4787]: I0126 19:18:33.545483 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 19:18:33 crc kubenswrapper[4787]: I0126 19:18:33.551255 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.551237417 podStartE2EDuration="3.551237417s" podCreationTimestamp="2026-01-26 19:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:18:33.544073283 +0000 UTC m=+5682.251209416" watchObservedRunningTime="2026-01-26 19:18:33.551237417 +0000 UTC m=+5682.258373550" Jan 26 19:18:35 crc kubenswrapper[4787]: I0126 19:18:35.508346 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 19:18:35 crc kubenswrapper[4787]: I0126 19:18:35.563579 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:18:35 crc kubenswrapper[4787]: I0126 19:18:35.564152 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="927f501f-60d6-42ad-b10c-7c9248a5e73c" containerName="cinder-scheduler" containerID="cri-o://ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc" gracePeriod=30 Jan 26 19:18:35 crc kubenswrapper[4787]: I0126 19:18:35.564251 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="927f501f-60d6-42ad-b10c-7c9248a5e73c" containerName="probe" containerID="cri-o://191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3" gracePeriod=30 Jan 26 19:18:36 crc kubenswrapper[4787]: I0126 19:18:36.556094 4787 generic.go:334] "Generic (PLEG): container finished" podID="927f501f-60d6-42ad-b10c-7c9248a5e73c" containerID="191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3" exitCode=0 Jan 26 19:18:36 crc kubenswrapper[4787]: I0126 19:18:36.556143 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"927f501f-60d6-42ad-b10c-7c9248a5e73c","Type":"ContainerDied","Data":"191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3"} Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.074763 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.136385 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.251869 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/927f501f-60d6-42ad-b10c-7c9248a5e73c-etc-machine-id\") pod \"927f501f-60d6-42ad-b10c-7c9248a5e73c\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.252308 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data\") pod \"927f501f-60d6-42ad-b10c-7c9248a5e73c\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.252331 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-scripts\") pod \"927f501f-60d6-42ad-b10c-7c9248a5e73c\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.252396 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data-custom\") pod \"927f501f-60d6-42ad-b10c-7c9248a5e73c\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.252439 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-combined-ca-bundle\") pod \"927f501f-60d6-42ad-b10c-7c9248a5e73c\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.252492 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ckzt\" (UniqueName: 
\"kubernetes.io/projected/927f501f-60d6-42ad-b10c-7c9248a5e73c-kube-api-access-8ckzt\") pod \"927f501f-60d6-42ad-b10c-7c9248a5e73c\" (UID: \"927f501f-60d6-42ad-b10c-7c9248a5e73c\") " Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.252134 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/927f501f-60d6-42ad-b10c-7c9248a5e73c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "927f501f-60d6-42ad-b10c-7c9248a5e73c" (UID: "927f501f-60d6-42ad-b10c-7c9248a5e73c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.258167 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "927f501f-60d6-42ad-b10c-7c9248a5e73c" (UID: "927f501f-60d6-42ad-b10c-7c9248a5e73c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.258813 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-scripts" (OuterVolumeSpecName: "scripts") pod "927f501f-60d6-42ad-b10c-7c9248a5e73c" (UID: "927f501f-60d6-42ad-b10c-7c9248a5e73c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.262442 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927f501f-60d6-42ad-b10c-7c9248a5e73c-kube-api-access-8ckzt" (OuterVolumeSpecName: "kube-api-access-8ckzt") pod "927f501f-60d6-42ad-b10c-7c9248a5e73c" (UID: "927f501f-60d6-42ad-b10c-7c9248a5e73c"). InnerVolumeSpecName "kube-api-access-8ckzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.320168 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "927f501f-60d6-42ad-b10c-7c9248a5e73c" (UID: "927f501f-60d6-42ad-b10c-7c9248a5e73c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.354545 4787 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/927f501f-60d6-42ad-b10c-7c9248a5e73c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.354588 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.354600 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.354613 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.354626 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ckzt\" (UniqueName: \"kubernetes.io/projected/927f501f-60d6-42ad-b10c-7c9248a5e73c-kube-api-access-8ckzt\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.368579 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data" (OuterVolumeSpecName: "config-data") pod "927f501f-60d6-42ad-b10c-7c9248a5e73c" (UID: "927f501f-60d6-42ad-b10c-7c9248a5e73c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.455969 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927f501f-60d6-42ad-b10c-7c9248a5e73c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.571739 4787 generic.go:334] "Generic (PLEG): container finished" podID="927f501f-60d6-42ad-b10c-7c9248a5e73c" containerID="ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc" exitCode=0 Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.571782 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"927f501f-60d6-42ad-b10c-7c9248a5e73c","Type":"ContainerDied","Data":"ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc"} Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.571813 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"927f501f-60d6-42ad-b10c-7c9248a5e73c","Type":"ContainerDied","Data":"9fbdd6e73f263c08a37bbdb113819c97e94a21f7e057d542389184f141824f75"} Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.571811 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.571828 4787 scope.go:117] "RemoveContainer" containerID="191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.616451 4787 scope.go:117] "RemoveContainer" containerID="ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.624427 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.626934 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.644666 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:18:37 crc kubenswrapper[4787]: E0126 19:18:37.645107 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927f501f-60d6-42ad-b10c-7c9248a5e73c" containerName="probe" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.645123 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="927f501f-60d6-42ad-b10c-7c9248a5e73c" containerName="probe" Jan 26 19:18:37 crc kubenswrapper[4787]: E0126 19:18:37.645141 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927f501f-60d6-42ad-b10c-7c9248a5e73c" containerName="cinder-scheduler" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.645148 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="927f501f-60d6-42ad-b10c-7c9248a5e73c" containerName="cinder-scheduler" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.645310 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="927f501f-60d6-42ad-b10c-7c9248a5e73c" containerName="cinder-scheduler" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.645324 4787 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="927f501f-60d6-42ad-b10c-7c9248a5e73c" containerName="probe" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.646464 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.651029 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.665093 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.673422 4787 scope.go:117] "RemoveContainer" containerID="191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3" Jan 26 19:18:37 crc kubenswrapper[4787]: E0126 19:18:37.676572 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3\": container with ID starting with 191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3 not found: ID does not exist" containerID="191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.676628 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3"} err="failed to get container status \"191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3\": rpc error: code = NotFound desc = could not find container \"191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3\": container with ID starting with 191ada62e43fec52eff72ea30d5b12ba4e1f8e2f6c82f88af90e66bcfbf23bf3 not found: ID does not exist" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.676661 4787 scope.go:117] "RemoveContainer" containerID="ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc" Jan 26 
19:18:37 crc kubenswrapper[4787]: E0126 19:18:37.677670 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc\": container with ID starting with ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc not found: ID does not exist" containerID="ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.677705 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc"} err="failed to get container status \"ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc\": rpc error: code = NotFound desc = could not find container \"ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc\": container with ID starting with ac2671d8882ae0111c3676c446f072ce7dab70cdcb903d468832468be075b6bc not found: ID does not exist" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.761768 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.761872 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.761923 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.762037 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsvnd\" (UniqueName: \"kubernetes.io/projected/d34a51b6-302e-47bf-8a31-56019455d91f-kube-api-access-fsvnd\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.762462 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d34a51b6-302e-47bf-8a31-56019455d91f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.762654 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.864404 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsvnd\" (UniqueName: \"kubernetes.io/projected/d34a51b6-302e-47bf-8a31-56019455d91f-kube-api-access-fsvnd\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.864626 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d34a51b6-302e-47bf-8a31-56019455d91f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.864718 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.864796 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d34a51b6-302e-47bf-8a31-56019455d91f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.864821 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.865004 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.865130 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " 
pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.869925 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.871160 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.875114 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.878618 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34a51b6-302e-47bf-8a31-56019455d91f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.894386 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsvnd\" (UniqueName: \"kubernetes.io/projected/d34a51b6-302e-47bf-8a31-56019455d91f-kube-api-access-fsvnd\") pod \"cinder-scheduler-0\" (UID: \"d34a51b6-302e-47bf-8a31-56019455d91f\") " pod="openstack/cinder-scheduler-0" Jan 26 19:18:37 crc kubenswrapper[4787]: I0126 19:18:37.981321 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:18:38 crc kubenswrapper[4787]: I0126 19:18:38.246776 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 26 19:18:38 crc kubenswrapper[4787]: I0126 19:18:38.413601 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:18:38 crc kubenswrapper[4787]: W0126 19:18:38.416309 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd34a51b6_302e_47bf_8a31_56019455d91f.slice/crio-01dc43a044b8a35dfff511ecdc2c0e236d2a31194d1b914e39b94f772dbad5bb WatchSource:0}: Error finding container 01dc43a044b8a35dfff511ecdc2c0e236d2a31194d1b914e39b94f772dbad5bb: Status 404 returned error can't find the container with id 01dc43a044b8a35dfff511ecdc2c0e236d2a31194d1b914e39b94f772dbad5bb Jan 26 19:18:38 crc kubenswrapper[4787]: I0126 19:18:38.598647 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d34a51b6-302e-47bf-8a31-56019455d91f","Type":"ContainerStarted","Data":"01dc43a044b8a35dfff511ecdc2c0e236d2a31194d1b914e39b94f772dbad5bb"} Jan 26 19:18:39 crc kubenswrapper[4787]: I0126 19:18:39.689480 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="927f501f-60d6-42ad-b10c-7c9248a5e73c" path="/var/lib/kubelet/pods/927f501f-60d6-42ad-b10c-7c9248a5e73c/volumes" Jan 26 19:18:39 crc kubenswrapper[4787]: I0126 19:18:39.706120 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d34a51b6-302e-47bf-8a31-56019455d91f","Type":"ContainerStarted","Data":"b027479daa556cd09c3e377f289ad447d42602b88458be22bc0376cfe10448f2"} Jan 26 19:18:39 crc kubenswrapper[4787]: I0126 19:18:39.706184 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"d34a51b6-302e-47bf-8a31-56019455d91f","Type":"ContainerStarted","Data":"28512400ca51b1fba237f53b56dd386775e64f721a5c036ad4fcab45db11f580"} Jan 26 19:18:39 crc kubenswrapper[4787]: I0126 19:18:39.738210 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.738192979 podStartE2EDuration="2.738192979s" podCreationTimestamp="2026-01-26 19:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:18:39.732473009 +0000 UTC m=+5688.439609162" watchObservedRunningTime="2026-01-26 19:18:39.738192979 +0000 UTC m=+5688.445329112" Jan 26 19:18:42 crc kubenswrapper[4787]: I0126 19:18:42.763609 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 19:18:42 crc kubenswrapper[4787]: I0126 19:18:42.981721 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 19:18:48 crc kubenswrapper[4787]: I0126 19:18:48.156310 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 19:19:16 crc kubenswrapper[4787]: I0126 19:19:16.807713 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:19:16 crc kubenswrapper[4787]: I0126 19:19:16.808311 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:19:23 crc 
kubenswrapper[4787]: I0126 19:19:23.427159 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dqjjl"] Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.431223 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.439685 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqjjl"] Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.482396 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-catalog-content\") pod \"certified-operators-dqjjl\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.482477 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-utilities\") pod \"certified-operators-dqjjl\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.482645 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5j7n\" (UniqueName: \"kubernetes.io/projected/f71b6dcd-72ea-436a-95f8-70f3ce5af962-kube-api-access-b5j7n\") pod \"certified-operators-dqjjl\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.584776 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-catalog-content\") pod \"certified-operators-dqjjl\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.584846 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-utilities\") pod \"certified-operators-dqjjl\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.584905 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5j7n\" (UniqueName: \"kubernetes.io/projected/f71b6dcd-72ea-436a-95f8-70f3ce5af962-kube-api-access-b5j7n\") pod \"certified-operators-dqjjl\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.585726 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-catalog-content\") pod \"certified-operators-dqjjl\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.586040 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-utilities\") pod \"certified-operators-dqjjl\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.621005 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5j7n\" (UniqueName: 
\"kubernetes.io/projected/f71b6dcd-72ea-436a-95f8-70f3ce5af962-kube-api-access-b5j7n\") pod \"certified-operators-dqjjl\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:23 crc kubenswrapper[4787]: I0126 19:19:23.752377 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:24 crc kubenswrapper[4787]: I0126 19:19:24.344319 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqjjl"] Jan 26 19:19:25 crc kubenswrapper[4787]: I0126 19:19:25.118561 4787 generic.go:334] "Generic (PLEG): container finished" podID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" containerID="5e543862fc32776ecdb2ad3d257753587eefc205f8f40d013eefd3f44975228b" exitCode=0 Jan 26 19:19:25 crc kubenswrapper[4787]: I0126 19:19:25.118925 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqjjl" event={"ID":"f71b6dcd-72ea-436a-95f8-70f3ce5af962","Type":"ContainerDied","Data":"5e543862fc32776ecdb2ad3d257753587eefc205f8f40d013eefd3f44975228b"} Jan 26 19:19:25 crc kubenswrapper[4787]: I0126 19:19:25.120015 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqjjl" event={"ID":"f71b6dcd-72ea-436a-95f8-70f3ce5af962","Type":"ContainerStarted","Data":"da6bc9e50203a6542f33c242e62d08996cf51b4c264108d512ee3a0cb3c39457"} Jan 26 19:19:27 crc kubenswrapper[4787]: I0126 19:19:27.140396 4787 generic.go:334] "Generic (PLEG): container finished" podID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" containerID="21ba7bfe1e61671aa089a5ed18b74d3a9121592e01eb2b73a0811742e4c7b683" exitCode=0 Jan 26 19:19:27 crc kubenswrapper[4787]: I0126 19:19:27.140503 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqjjl" 
event={"ID":"f71b6dcd-72ea-436a-95f8-70f3ce5af962","Type":"ContainerDied","Data":"21ba7bfe1e61671aa089a5ed18b74d3a9121592e01eb2b73a0811742e4c7b683"} Jan 26 19:19:29 crc kubenswrapper[4787]: I0126 19:19:29.161422 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqjjl" event={"ID":"f71b6dcd-72ea-436a-95f8-70f3ce5af962","Type":"ContainerStarted","Data":"78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6"} Jan 26 19:19:29 crc kubenswrapper[4787]: I0126 19:19:29.185906 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dqjjl" podStartSLOduration=3.019871373 podStartE2EDuration="6.185884766s" podCreationTimestamp="2026-01-26 19:19:23 +0000 UTC" firstStartedPulling="2026-01-26 19:19:25.12124443 +0000 UTC m=+5733.828380563" lastFinishedPulling="2026-01-26 19:19:28.287257823 +0000 UTC m=+5736.994393956" observedRunningTime="2026-01-26 19:19:29.181817227 +0000 UTC m=+5737.888953360" watchObservedRunningTime="2026-01-26 19:19:29.185884766 +0000 UTC m=+5737.893020899" Jan 26 19:19:33 crc kubenswrapper[4787]: I0126 19:19:33.752775 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:33 crc kubenswrapper[4787]: I0126 19:19:33.753463 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:33 crc kubenswrapper[4787]: I0126 19:19:33.798510 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:34 crc kubenswrapper[4787]: I0126 19:19:34.250873 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:34 crc kubenswrapper[4787]: I0126 19:19:34.305486 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-dqjjl"] Jan 26 19:19:36 crc kubenswrapper[4787]: I0126 19:19:36.227716 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dqjjl" podUID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" containerName="registry-server" containerID="cri-o://78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6" gracePeriod=2 Jan 26 19:19:36 crc kubenswrapper[4787]: I0126 19:19:36.715298 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:36 crc kubenswrapper[4787]: I0126 19:19:36.836462 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-utilities\") pod \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " Jan 26 19:19:36 crc kubenswrapper[4787]: I0126 19:19:36.836675 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-catalog-content\") pod \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " Jan 26 19:19:36 crc kubenswrapper[4787]: I0126 19:19:36.836705 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5j7n\" (UniqueName: \"kubernetes.io/projected/f71b6dcd-72ea-436a-95f8-70f3ce5af962-kube-api-access-b5j7n\") pod \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\" (UID: \"f71b6dcd-72ea-436a-95f8-70f3ce5af962\") " Jan 26 19:19:36 crc kubenswrapper[4787]: I0126 19:19:36.837371 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-utilities" (OuterVolumeSpecName: "utilities") pod "f71b6dcd-72ea-436a-95f8-70f3ce5af962" (UID: 
"f71b6dcd-72ea-436a-95f8-70f3ce5af962"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:19:36 crc kubenswrapper[4787]: I0126 19:19:36.843277 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71b6dcd-72ea-436a-95f8-70f3ce5af962-kube-api-access-b5j7n" (OuterVolumeSpecName: "kube-api-access-b5j7n") pod "f71b6dcd-72ea-436a-95f8-70f3ce5af962" (UID: "f71b6dcd-72ea-436a-95f8-70f3ce5af962"). InnerVolumeSpecName "kube-api-access-b5j7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:19:36 crc kubenswrapper[4787]: I0126 19:19:36.884208 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f71b6dcd-72ea-436a-95f8-70f3ce5af962" (UID: "f71b6dcd-72ea-436a-95f8-70f3ce5af962"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:19:36 crc kubenswrapper[4787]: I0126 19:19:36.939166 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:19:36 crc kubenswrapper[4787]: I0126 19:19:36.939219 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71b6dcd-72ea-436a-95f8-70f3ce5af962-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:19:36 crc kubenswrapper[4787]: I0126 19:19:36.939234 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5j7n\" (UniqueName: \"kubernetes.io/projected/f71b6dcd-72ea-436a-95f8-70f3ce5af962-kube-api-access-b5j7n\") on node \"crc\" DevicePath \"\"" Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.238652 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" containerID="78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6" exitCode=0 Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.238698 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqjjl" event={"ID":"f71b6dcd-72ea-436a-95f8-70f3ce5af962","Type":"ContainerDied","Data":"78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6"} Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.238715 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqjjl" Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.238732 4787 scope.go:117] "RemoveContainer" containerID="78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6" Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.238723 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqjjl" event={"ID":"f71b6dcd-72ea-436a-95f8-70f3ce5af962","Type":"ContainerDied","Data":"da6bc9e50203a6542f33c242e62d08996cf51b4c264108d512ee3a0cb3c39457"} Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.260436 4787 scope.go:117] "RemoveContainer" containerID="21ba7bfe1e61671aa089a5ed18b74d3a9121592e01eb2b73a0811742e4c7b683" Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.275974 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dqjjl"] Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.284729 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dqjjl"] Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.289655 4787 scope.go:117] "RemoveContainer" containerID="5e543862fc32776ecdb2ad3d257753587eefc205f8f40d013eefd3f44975228b" Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.331464 4787 scope.go:117] "RemoveContainer" 
containerID="78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6" Jan 26 19:19:37 crc kubenswrapper[4787]: E0126 19:19:37.332068 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6\": container with ID starting with 78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6 not found: ID does not exist" containerID="78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6" Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.332107 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6"} err="failed to get container status \"78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6\": rpc error: code = NotFound desc = could not find container \"78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6\": container with ID starting with 78a435af42ab9b7ca4a352ec4978c8e154f3d2a302c85a4a62cf6090f0ad30f6 not found: ID does not exist" Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.332132 4787 scope.go:117] "RemoveContainer" containerID="21ba7bfe1e61671aa089a5ed18b74d3a9121592e01eb2b73a0811742e4c7b683" Jan 26 19:19:37 crc kubenswrapper[4787]: E0126 19:19:37.332487 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ba7bfe1e61671aa089a5ed18b74d3a9121592e01eb2b73a0811742e4c7b683\": container with ID starting with 21ba7bfe1e61671aa089a5ed18b74d3a9121592e01eb2b73a0811742e4c7b683 not found: ID does not exist" containerID="21ba7bfe1e61671aa089a5ed18b74d3a9121592e01eb2b73a0811742e4c7b683" Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.332569 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"21ba7bfe1e61671aa089a5ed18b74d3a9121592e01eb2b73a0811742e4c7b683"} err="failed to get container status \"21ba7bfe1e61671aa089a5ed18b74d3a9121592e01eb2b73a0811742e4c7b683\": rpc error: code = NotFound desc = could not find container \"21ba7bfe1e61671aa089a5ed18b74d3a9121592e01eb2b73a0811742e4c7b683\": container with ID starting with 21ba7bfe1e61671aa089a5ed18b74d3a9121592e01eb2b73a0811742e4c7b683 not found: ID does not exist" Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.332645 4787 scope.go:117] "RemoveContainer" containerID="5e543862fc32776ecdb2ad3d257753587eefc205f8f40d013eefd3f44975228b" Jan 26 19:19:37 crc kubenswrapper[4787]: E0126 19:19:37.332915 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e543862fc32776ecdb2ad3d257753587eefc205f8f40d013eefd3f44975228b\": container with ID starting with 5e543862fc32776ecdb2ad3d257753587eefc205f8f40d013eefd3f44975228b not found: ID does not exist" containerID="5e543862fc32776ecdb2ad3d257753587eefc205f8f40d013eefd3f44975228b" Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.333038 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e543862fc32776ecdb2ad3d257753587eefc205f8f40d013eefd3f44975228b"} err="failed to get container status \"5e543862fc32776ecdb2ad3d257753587eefc205f8f40d013eefd3f44975228b\": rpc error: code = NotFound desc = could not find container \"5e543862fc32776ecdb2ad3d257753587eefc205f8f40d013eefd3f44975228b\": container with ID starting with 5e543862fc32776ecdb2ad3d257753587eefc205f8f40d013eefd3f44975228b not found: ID does not exist" Jan 26 19:19:37 crc kubenswrapper[4787]: I0126 19:19:37.600418 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" path="/var/lib/kubelet/pods/f71b6dcd-72ea-436a-95f8-70f3ce5af962/volumes" Jan 26 19:19:46 crc kubenswrapper[4787]: I0126 
19:19:46.808120 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:19:46 crc kubenswrapper[4787]: I0126 19:19:46.808621 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:19:59 crc kubenswrapper[4787]: I0126 19:19:59.033476 4787 scope.go:117] "RemoveContainer" containerID="ed418755fad2ce15fa67dcfaaabc7c223fe99c03e3b9bc868ac52725f3221c6d" Jan 26 19:19:59 crc kubenswrapper[4787]: I0126 19:19:59.057004 4787 scope.go:117] "RemoveContainer" containerID="484a85ea505224c5ad8e6639403454141916f7a38e154c095439738d06fb911f" Jan 26 19:20:16 crc kubenswrapper[4787]: I0126 19:20:16.808099 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:20:16 crc kubenswrapper[4787]: I0126 19:20:16.808621 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:20:16 crc kubenswrapper[4787]: I0126 19:20:16.808664 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 19:20:16 crc kubenswrapper[4787]: I0126 19:20:16.809446 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7217ed496ed0b4c2a722c7ac3d16d8a1701ba2e7156a3c1353ad4ba7ede3fc2d"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:20:16 crc kubenswrapper[4787]: I0126 19:20:16.809500 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://7217ed496ed0b4c2a722c7ac3d16d8a1701ba2e7156a3c1353ad4ba7ede3fc2d" gracePeriod=600 Jan 26 19:20:17 crc kubenswrapper[4787]: I0126 19:20:17.584180 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="7217ed496ed0b4c2a722c7ac3d16d8a1701ba2e7156a3c1353ad4ba7ede3fc2d" exitCode=0 Jan 26 19:20:17 crc kubenswrapper[4787]: I0126 19:20:17.584269 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"7217ed496ed0b4c2a722c7ac3d16d8a1701ba2e7156a3c1353ad4ba7ede3fc2d"} Jan 26 19:20:17 crc kubenswrapper[4787]: I0126 19:20:17.584721 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3"} Jan 26 19:20:17 crc kubenswrapper[4787]: I0126 19:20:17.584743 4787 scope.go:117] "RemoveContainer" 
containerID="066ac1f2cf3b213d2b14fba55da068de7b438a9dd3b7a18e26b8368754c3da14" Jan 26 19:20:18 crc kubenswrapper[4787]: I0126 19:20:18.039256 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nrnm9"] Jan 26 19:20:18 crc kubenswrapper[4787]: I0126 19:20:18.047716 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nrnm9"] Jan 26 19:20:19 crc kubenswrapper[4787]: I0126 19:20:19.042674 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f508-account-create-update-2plvr"] Jan 26 19:20:19 crc kubenswrapper[4787]: I0126 19:20:19.057707 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f508-account-create-update-2plvr"] Jan 26 19:20:19 crc kubenswrapper[4787]: I0126 19:20:19.600664 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158f07c3-118d-4640-85ef-154ad9f2c4e8" path="/var/lib/kubelet/pods/158f07c3-118d-4640-85ef-154ad9f2c4e8/volumes" Jan 26 19:20:19 crc kubenswrapper[4787]: I0126 19:20:19.601413 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf090e0a-fc1b-48c1-8678-272ca8ee4901" path="/var/lib/kubelet/pods/cf090e0a-fc1b-48c1-8678-272ca8ee4901/volumes" Jan 26 19:20:25 crc kubenswrapper[4787]: I0126 19:20:25.035302 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-t9r29"] Jan 26 19:20:25 crc kubenswrapper[4787]: I0126 19:20:25.045320 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-t9r29"] Jan 26 19:20:25 crc kubenswrapper[4787]: I0126 19:20:25.601537 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="497b87c0-2345-4044-a0eb-f620bc564ad0" path="/var/lib/kubelet/pods/497b87c0-2345-4044-a0eb-f620bc564ad0/volumes" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.901256 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nnjf5"] Jan 26 19:20:30 crc 
kubenswrapper[4787]: E0126 19:20:30.902173 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" containerName="extract-utilities" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.902189 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" containerName="extract-utilities" Jan 26 19:20:30 crc kubenswrapper[4787]: E0126 19:20:30.902211 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" containerName="extract-content" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.902217 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" containerName="extract-content" Jan 26 19:20:30 crc kubenswrapper[4787]: E0126 19:20:30.902235 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" containerName="registry-server" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.902241 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" containerName="registry-server" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.902391 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71b6dcd-72ea-436a-95f8-70f3ce5af962" containerName="registry-server" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.903111 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.913922 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4nlbm"] Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.914391 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-2bhzb" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.915148 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.916360 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.938357 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nnjf5"] Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.951462 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4nlbm"] Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.960078 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/105e0e3a-23ba-431e-8736-ffce799cf8f2-var-run-ovn\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.960121 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/105e0e3a-23ba-431e-8736-ffce799cf8f2-var-run\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.960153 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nsskb\" (UniqueName: \"kubernetes.io/projected/105e0e3a-23ba-431e-8736-ffce799cf8f2-kube-api-access-nsskb\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.960179 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-var-run\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.960419 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/105e0e3a-23ba-431e-8736-ffce799cf8f2-var-log-ovn\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.960502 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-var-lib\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.960568 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-scripts\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.960648 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/105e0e3a-23ba-431e-8736-ffce799cf8f2-scripts\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.960678 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-etc-ovs\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.960714 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-var-log\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:30 crc kubenswrapper[4787]: I0126 19:20:30.960763 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjpd4\" (UniqueName: \"kubernetes.io/projected/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-kube-api-access-sjpd4\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.081221 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/105e0e3a-23ba-431e-8736-ffce799cf8f2-var-log-ovn\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.081356 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-var-lib\") pod 
\"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.081541 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-scripts\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.081816 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/105e0e3a-23ba-431e-8736-ffce799cf8f2-scripts\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.081882 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-etc-ovs\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.081966 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-var-log\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.082077 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjpd4\" (UniqueName: \"kubernetes.io/projected/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-kube-api-access-sjpd4\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc 
kubenswrapper[4787]: I0126 19:20:31.082293 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/105e0e3a-23ba-431e-8736-ffce799cf8f2-var-run-ovn\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.082376 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/105e0e3a-23ba-431e-8736-ffce799cf8f2-var-run\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.082469 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsskb\" (UniqueName: \"kubernetes.io/projected/105e0e3a-23ba-431e-8736-ffce799cf8f2-kube-api-access-nsskb\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.082467 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-var-lib\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.082547 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-var-run\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.082556 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-var-log\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.082643 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-etc-ovs\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.082814 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/105e0e3a-23ba-431e-8736-ffce799cf8f2-var-run-ovn\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.082878 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/105e0e3a-23ba-431e-8736-ffce799cf8f2-var-run\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.083171 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-var-run\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.083647 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/105e0e3a-23ba-431e-8736-ffce799cf8f2-var-log-ovn\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 
19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.086755 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/105e0e3a-23ba-431e-8736-ffce799cf8f2-scripts\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.094261 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-scripts\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.104895 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsskb\" (UniqueName: \"kubernetes.io/projected/105e0e3a-23ba-431e-8736-ffce799cf8f2-kube-api-access-nsskb\") pod \"ovn-controller-nnjf5\" (UID: \"105e0e3a-23ba-431e-8736-ffce799cf8f2\") " pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.123382 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjpd4\" (UniqueName: \"kubernetes.io/projected/7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14-kube-api-access-sjpd4\") pod \"ovn-controller-ovs-4nlbm\" (UID: \"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14\") " pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.265266 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.280700 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:31 crc kubenswrapper[4787]: I0126 19:20:31.867826 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nnjf5"] Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.091862 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4nlbm"] Jan 26 19:20:32 crc kubenswrapper[4787]: W0126 19:20:32.100097 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac1ad50_9bfa_4569_b68e_d1ef0deb1f14.slice/crio-c5c0a5b7a452743c4e57e3e1d3541645c658748a81124f8ca8f1bda30388988d WatchSource:0}: Error finding container c5c0a5b7a452743c4e57e3e1d3541645c658748a81124f8ca8f1bda30388988d: Status 404 returned error can't find the container with id c5c0a5b7a452743c4e57e3e1d3541645c658748a81124f8ca8f1bda30388988d Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.589234 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-7k54j"] Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.590931 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.593508 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.620442 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7k54j"] Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.716706 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4nlbm" event={"ID":"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14","Type":"ContainerStarted","Data":"efb3bcf1911514bcac4d5a273f5a9d5e1189340faec5a8f3aec973c5a5490399"} Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.717020 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4nlbm" event={"ID":"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14","Type":"ContainerStarted","Data":"c5c0a5b7a452743c4e57e3e1d3541645c658748a81124f8ca8f1bda30388988d"} Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.725348 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nnjf5" event={"ID":"105e0e3a-23ba-431e-8736-ffce799cf8f2","Type":"ContainerStarted","Data":"23f1a64f21f9573439b018814f71ff507ba72375780058b6b26a1d7792e78f7a"} Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.725402 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nnjf5" event={"ID":"105e0e3a-23ba-431e-8736-ffce799cf8f2","Type":"ContainerStarted","Data":"858321c0a49f76de0d67fcbca68c1bef01a215fc465cba502ace28ecfe95e4cc"} Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.725714 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-nnjf5" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.725964 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fz4xw\" (UniqueName: \"kubernetes.io/projected/2a3faefb-09b0-40ca-b548-c8b3546778ee-kube-api-access-fz4xw\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.726099 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3faefb-09b0-40ca-b548-c8b3546778ee-config\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.726160 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2a3faefb-09b0-40ca-b548-c8b3546778ee-ovs-rundir\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.726286 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2a3faefb-09b0-40ca-b548-c8b3546778ee-ovn-rundir\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.790695 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nnjf5" podStartSLOduration=2.790673077 podStartE2EDuration="2.790673077s" podCreationTimestamp="2026-01-26 19:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:20:32.779273651 +0000 UTC m=+5801.486409784" watchObservedRunningTime="2026-01-26 19:20:32.790673077 +0000 UTC 
m=+5801.497809210" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.827820 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2a3faefb-09b0-40ca-b548-c8b3546778ee-ovn-rundir\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.827997 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz4xw\" (UniqueName: \"kubernetes.io/projected/2a3faefb-09b0-40ca-b548-c8b3546778ee-kube-api-access-fz4xw\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.828048 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3faefb-09b0-40ca-b548-c8b3546778ee-config\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.828099 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2a3faefb-09b0-40ca-b548-c8b3546778ee-ovs-rundir\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.830317 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3faefb-09b0-40ca-b548-c8b3546778ee-config\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.830473 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2a3faefb-09b0-40ca-b548-c8b3546778ee-ovs-rundir\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.830486 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2a3faefb-09b0-40ca-b548-c8b3546778ee-ovn-rundir\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.860748 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz4xw\" (UniqueName: \"kubernetes.io/projected/2a3faefb-09b0-40ca-b548-c8b3546778ee-kube-api-access-fz4xw\") pod \"ovn-controller-metrics-7k54j\" (UID: \"2a3faefb-09b0-40ca-b548-c8b3546778ee\") " pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.913903 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-7k54j" Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.974927 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-d6t84"] Jan 26 19:20:32 crc kubenswrapper[4787]: I0126 19:20:32.976097 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-d6t84" Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.001535 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-d6t84"] Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.034617 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnq9w\" (UniqueName: \"kubernetes.io/projected/889b0a68-86e3-4211-bbb8-40903d0bd246-kube-api-access-wnq9w\") pod \"octavia-db-create-d6t84\" (UID: \"889b0a68-86e3-4211-bbb8-40903d0bd246\") " pod="openstack/octavia-db-create-d6t84" Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.034778 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/889b0a68-86e3-4211-bbb8-40903d0bd246-operator-scripts\") pod \"octavia-db-create-d6t84\" (UID: \"889b0a68-86e3-4211-bbb8-40903d0bd246\") " pod="openstack/octavia-db-create-d6t84" Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.136442 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/889b0a68-86e3-4211-bbb8-40903d0bd246-operator-scripts\") pod \"octavia-db-create-d6t84\" (UID: \"889b0a68-86e3-4211-bbb8-40903d0bd246\") " pod="openstack/octavia-db-create-d6t84" Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.137035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnq9w\" (UniqueName: \"kubernetes.io/projected/889b0a68-86e3-4211-bbb8-40903d0bd246-kube-api-access-wnq9w\") pod \"octavia-db-create-d6t84\" (UID: \"889b0a68-86e3-4211-bbb8-40903d0bd246\") " pod="openstack/octavia-db-create-d6t84" Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.137426 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/889b0a68-86e3-4211-bbb8-40903d0bd246-operator-scripts\") pod \"octavia-db-create-d6t84\" (UID: \"889b0a68-86e3-4211-bbb8-40903d0bd246\") " pod="openstack/octavia-db-create-d6t84" Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.166806 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnq9w\" (UniqueName: \"kubernetes.io/projected/889b0a68-86e3-4211-bbb8-40903d0bd246-kube-api-access-wnq9w\") pod \"octavia-db-create-d6t84\" (UID: \"889b0a68-86e3-4211-bbb8-40903d0bd246\") " pod="openstack/octavia-db-create-d6t84" Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.366084 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-d6t84" Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.525840 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7k54j"] Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.652467 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-d6t84"] Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.737905 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-d6t84" event={"ID":"889b0a68-86e3-4211-bbb8-40903d0bd246","Type":"ContainerStarted","Data":"b700a914ab1191f3258e709da951a7a41d9c1a23f2f3793b09ab12d90aec7d1d"} Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.739410 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7k54j" event={"ID":"2a3faefb-09b0-40ca-b548-c8b3546778ee","Type":"ContainerStarted","Data":"4e420d1ad605a09afca9f6053f10ddc7fe3fc0c0a8a3b191a32b0e818b0f74d0"} Jan 26 19:20:33 crc kubenswrapper[4787]: I0126 19:20:33.752469 4787 generic.go:334] "Generic (PLEG): container finished" podID="7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14" containerID="efb3bcf1911514bcac4d5a273f5a9d5e1189340faec5a8f3aec973c5a5490399" exitCode=0 Jan 26 19:20:33 crc 
kubenswrapper[4787]: I0126 19:20:33.752657 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4nlbm" event={"ID":"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14","Type":"ContainerDied","Data":"efb3bcf1911514bcac4d5a273f5a9d5e1189340faec5a8f3aec973c5a5490399"} Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.764225 4787 generic.go:334] "Generic (PLEG): container finished" podID="889b0a68-86e3-4211-bbb8-40903d0bd246" containerID="de3c40be93447f2219c999d3828317ed1025bd1cbf3866de85c16e8f23be9bce" exitCode=0 Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.764343 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-d6t84" event={"ID":"889b0a68-86e3-4211-bbb8-40903d0bd246","Type":"ContainerDied","Data":"de3c40be93447f2219c999d3828317ed1025bd1cbf3866de85c16e8f23be9bce"} Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.767202 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7k54j" event={"ID":"2a3faefb-09b0-40ca-b548-c8b3546778ee","Type":"ContainerStarted","Data":"87e8c87fd7b35f7c0eedf33576eaeb951faf87f3bf7eb39a6021a7adf87b8b80"} Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.769738 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4nlbm" event={"ID":"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14","Type":"ContainerStarted","Data":"0465eb3ddd5d634261d911e04520d077efe9dda3844869c54d943349ef3cfb9a"} Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.769777 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4nlbm" event={"ID":"7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14","Type":"ContainerStarted","Data":"aecb51dfdc86ae9737366a8e65d0c130febe6d6f5b92a6c07f12d46c3b0e047c"} Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.769939 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:34 crc 
kubenswrapper[4787]: I0126 19:20:34.769981 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.804542 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-7430-account-create-update-6xfvh"] Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.806090 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-7430-account-create-update-6xfvh" Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.807984 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.824111 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-7430-account-create-update-6xfvh"] Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.828168 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-7k54j" podStartSLOduration=2.828144471 podStartE2EDuration="2.828144471s" podCreationTimestamp="2026-01-26 19:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:20:34.800795066 +0000 UTC m=+5803.507931199" watchObservedRunningTime="2026-01-26 19:20:34.828144471 +0000 UTC m=+5803.535280604" Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.840269 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4nlbm" podStartSLOduration=4.840221545 podStartE2EDuration="4.840221545s" podCreationTimestamp="2026-01-26 19:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:20:34.83590271 +0000 UTC m=+5803.543038843" watchObservedRunningTime="2026-01-26 19:20:34.840221545 +0000 UTC m=+5803.547357678" Jan 
26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.873573 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7xfv\" (UniqueName: \"kubernetes.io/projected/63118e30-84e6-489f-aaeb-bfec57775e4b-kube-api-access-b7xfv\") pod \"octavia-7430-account-create-update-6xfvh\" (UID: \"63118e30-84e6-489f-aaeb-bfec57775e4b\") " pod="openstack/octavia-7430-account-create-update-6xfvh" Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.873686 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63118e30-84e6-489f-aaeb-bfec57775e4b-operator-scripts\") pod \"octavia-7430-account-create-update-6xfvh\" (UID: \"63118e30-84e6-489f-aaeb-bfec57775e4b\") " pod="openstack/octavia-7430-account-create-update-6xfvh" Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.975181 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7xfv\" (UniqueName: \"kubernetes.io/projected/63118e30-84e6-489f-aaeb-bfec57775e4b-kube-api-access-b7xfv\") pod \"octavia-7430-account-create-update-6xfvh\" (UID: \"63118e30-84e6-489f-aaeb-bfec57775e4b\") " pod="openstack/octavia-7430-account-create-update-6xfvh" Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.975298 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63118e30-84e6-489f-aaeb-bfec57775e4b-operator-scripts\") pod \"octavia-7430-account-create-update-6xfvh\" (UID: \"63118e30-84e6-489f-aaeb-bfec57775e4b\") " pod="openstack/octavia-7430-account-create-update-6xfvh" Jan 26 19:20:34 crc kubenswrapper[4787]: I0126 19:20:34.976269 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63118e30-84e6-489f-aaeb-bfec57775e4b-operator-scripts\") pod 
\"octavia-7430-account-create-update-6xfvh\" (UID: \"63118e30-84e6-489f-aaeb-bfec57775e4b\") " pod="openstack/octavia-7430-account-create-update-6xfvh" Jan 26 19:20:35 crc kubenswrapper[4787]: I0126 19:20:35.007855 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7xfv\" (UniqueName: \"kubernetes.io/projected/63118e30-84e6-489f-aaeb-bfec57775e4b-kube-api-access-b7xfv\") pod \"octavia-7430-account-create-update-6xfvh\" (UID: \"63118e30-84e6-489f-aaeb-bfec57775e4b\") " pod="openstack/octavia-7430-account-create-update-6xfvh" Jan 26 19:20:35 crc kubenswrapper[4787]: I0126 19:20:35.126779 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-7430-account-create-update-6xfvh" Jan 26 19:20:35 crc kubenswrapper[4787]: I0126 19:20:35.630647 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-7430-account-create-update-6xfvh"] Jan 26 19:20:35 crc kubenswrapper[4787]: I0126 19:20:35.780282 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-7430-account-create-update-6xfvh" event={"ID":"63118e30-84e6-489f-aaeb-bfec57775e4b","Type":"ContainerStarted","Data":"bcd121f5638fe4a7078124768596bb835cec6b109c0f68ac0fd54bf405715eab"} Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.283389 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-d6t84" Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.406880 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/889b0a68-86e3-4211-bbb8-40903d0bd246-operator-scripts\") pod \"889b0a68-86e3-4211-bbb8-40903d0bd246\" (UID: \"889b0a68-86e3-4211-bbb8-40903d0bd246\") " Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.407000 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnq9w\" (UniqueName: \"kubernetes.io/projected/889b0a68-86e3-4211-bbb8-40903d0bd246-kube-api-access-wnq9w\") pod \"889b0a68-86e3-4211-bbb8-40903d0bd246\" (UID: \"889b0a68-86e3-4211-bbb8-40903d0bd246\") " Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.411629 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/889b0a68-86e3-4211-bbb8-40903d0bd246-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "889b0a68-86e3-4211-bbb8-40903d0bd246" (UID: "889b0a68-86e3-4211-bbb8-40903d0bd246"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.431650 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889b0a68-86e3-4211-bbb8-40903d0bd246-kube-api-access-wnq9w" (OuterVolumeSpecName: "kube-api-access-wnq9w") pod "889b0a68-86e3-4211-bbb8-40903d0bd246" (UID: "889b0a68-86e3-4211-bbb8-40903d0bd246"). InnerVolumeSpecName "kube-api-access-wnq9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.509049 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/889b0a68-86e3-4211-bbb8-40903d0bd246-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.509087 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnq9w\" (UniqueName: \"kubernetes.io/projected/889b0a68-86e3-4211-bbb8-40903d0bd246-kube-api-access-wnq9w\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.790548 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-d6t84" event={"ID":"889b0a68-86e3-4211-bbb8-40903d0bd246","Type":"ContainerDied","Data":"b700a914ab1191f3258e709da951a7a41d9c1a23f2f3793b09ab12d90aec7d1d"} Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.790965 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b700a914ab1191f3258e709da951a7a41d9c1a23f2f3793b09ab12d90aec7d1d" Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.790574 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-d6t84" Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.793053 4787 generic.go:334] "Generic (PLEG): container finished" podID="63118e30-84e6-489f-aaeb-bfec57775e4b" containerID="3085b2e02b754b94e8742f65729e6947d0a483e51b3c26166bb28c84e18f5c9c" exitCode=0 Jan 26 19:20:36 crc kubenswrapper[4787]: I0126 19:20:36.793102 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-7430-account-create-update-6xfvh" event={"ID":"63118e30-84e6-489f-aaeb-bfec57775e4b","Type":"ContainerDied","Data":"3085b2e02b754b94e8742f65729e6947d0a483e51b3c26166bb28c84e18f5c9c"} Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.045697 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7bd45"] Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.055101 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7bd45"] Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.238847 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-7430-account-create-update-6xfvh" Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.377542 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7xfv\" (UniqueName: \"kubernetes.io/projected/63118e30-84e6-489f-aaeb-bfec57775e4b-kube-api-access-b7xfv\") pod \"63118e30-84e6-489f-aaeb-bfec57775e4b\" (UID: \"63118e30-84e6-489f-aaeb-bfec57775e4b\") " Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.377615 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63118e30-84e6-489f-aaeb-bfec57775e4b-operator-scripts\") pod \"63118e30-84e6-489f-aaeb-bfec57775e4b\" (UID: \"63118e30-84e6-489f-aaeb-bfec57775e4b\") " Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.378839 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63118e30-84e6-489f-aaeb-bfec57775e4b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63118e30-84e6-489f-aaeb-bfec57775e4b" (UID: "63118e30-84e6-489f-aaeb-bfec57775e4b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.384480 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63118e30-84e6-489f-aaeb-bfec57775e4b-kube-api-access-b7xfv" (OuterVolumeSpecName: "kube-api-access-b7xfv") pod "63118e30-84e6-489f-aaeb-bfec57775e4b" (UID: "63118e30-84e6-489f-aaeb-bfec57775e4b"). InnerVolumeSpecName "kube-api-access-b7xfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.479553 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7xfv\" (UniqueName: \"kubernetes.io/projected/63118e30-84e6-489f-aaeb-bfec57775e4b-kube-api-access-b7xfv\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.479866 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63118e30-84e6-489f-aaeb-bfec57775e4b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.814465 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-7430-account-create-update-6xfvh" Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.814731 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-7430-account-create-update-6xfvh" event={"ID":"63118e30-84e6-489f-aaeb-bfec57775e4b","Type":"ContainerDied","Data":"bcd121f5638fe4a7078124768596bb835cec6b109c0f68ac0fd54bf405715eab"} Jan 26 19:20:38 crc kubenswrapper[4787]: I0126 19:20:38.814855 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcd121f5638fe4a7078124768596bb835cec6b109c0f68ac0fd54bf405715eab" Jan 26 19:20:39 crc kubenswrapper[4787]: I0126 19:20:39.603570 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b42381c-1dc3-4245-9d23-c0eec94f6ae1" path="/var/lib/kubelet/pods/5b42381c-1dc3-4245-9d23-c0eec94f6ae1/volumes" Jan 26 19:20:40 crc kubenswrapper[4787]: I0126 19:20:40.853907 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-87b7d"] Jan 26 19:20:40 crc kubenswrapper[4787]: E0126 19:20:40.854827 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63118e30-84e6-489f-aaeb-bfec57775e4b" containerName="mariadb-account-create-update" Jan 26 
19:20:40 crc kubenswrapper[4787]: I0126 19:20:40.854847 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="63118e30-84e6-489f-aaeb-bfec57775e4b" containerName="mariadb-account-create-update" Jan 26 19:20:40 crc kubenswrapper[4787]: E0126 19:20:40.854870 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="889b0a68-86e3-4211-bbb8-40903d0bd246" containerName="mariadb-database-create" Jan 26 19:20:40 crc kubenswrapper[4787]: I0126 19:20:40.854878 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="889b0a68-86e3-4211-bbb8-40903d0bd246" containerName="mariadb-database-create" Jan 26 19:20:40 crc kubenswrapper[4787]: I0126 19:20:40.855102 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="63118e30-84e6-489f-aaeb-bfec57775e4b" containerName="mariadb-account-create-update" Jan 26 19:20:40 crc kubenswrapper[4787]: I0126 19:20:40.855123 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="889b0a68-86e3-4211-bbb8-40903d0bd246" containerName="mariadb-database-create" Jan 26 19:20:40 crc kubenswrapper[4787]: I0126 19:20:40.855906 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-87b7d" Jan 26 19:20:40 crc kubenswrapper[4787]: I0126 19:20:40.866018 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-87b7d"] Jan 26 19:20:40 crc kubenswrapper[4787]: I0126 19:20:40.924933 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8blj4\" (UniqueName: \"kubernetes.io/projected/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-kube-api-access-8blj4\") pod \"octavia-persistence-db-create-87b7d\" (UID: \"b7cb052f-a346-4b9e-82bd-4cb6bcef7563\") " pod="openstack/octavia-persistence-db-create-87b7d" Jan 26 19:20:40 crc kubenswrapper[4787]: I0126 19:20:40.925159 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-operator-scripts\") pod \"octavia-persistence-db-create-87b7d\" (UID: \"b7cb052f-a346-4b9e-82bd-4cb6bcef7563\") " pod="openstack/octavia-persistence-db-create-87b7d" Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.026871 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-operator-scripts\") pod \"octavia-persistence-db-create-87b7d\" (UID: \"b7cb052f-a346-4b9e-82bd-4cb6bcef7563\") " pod="openstack/octavia-persistence-db-create-87b7d" Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.026993 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8blj4\" (UniqueName: \"kubernetes.io/projected/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-kube-api-access-8blj4\") pod \"octavia-persistence-db-create-87b7d\" (UID: \"b7cb052f-a346-4b9e-82bd-4cb6bcef7563\") " pod="openstack/octavia-persistence-db-create-87b7d" Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.027835 
4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-operator-scripts\") pod \"octavia-persistence-db-create-87b7d\" (UID: \"b7cb052f-a346-4b9e-82bd-4cb6bcef7563\") " pod="openstack/octavia-persistence-db-create-87b7d" Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.049930 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8blj4\" (UniqueName: \"kubernetes.io/projected/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-kube-api-access-8blj4\") pod \"octavia-persistence-db-create-87b7d\" (UID: \"b7cb052f-a346-4b9e-82bd-4cb6bcef7563\") " pod="openstack/octavia-persistence-db-create-87b7d" Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.181060 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-87b7d" Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.657175 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-87b7d"] Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.848418 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-87b7d" event={"ID":"b7cb052f-a346-4b9e-82bd-4cb6bcef7563","Type":"ContainerStarted","Data":"4991f36a0ac91005ba580be429a9641012fe97510b1b38b91f66bd914469e711"} Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.872428 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-0e5d-account-create-update-df54g"] Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.873761 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-0e5d-account-create-update-df54g" Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.876050 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.882823 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-0e5d-account-create-update-df54g"] Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.949587 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg8wq\" (UniqueName: \"kubernetes.io/projected/ebee04d5-772e-4f11-9ca5-a24268283f6d-kube-api-access-sg8wq\") pod \"octavia-0e5d-account-create-update-df54g\" (UID: \"ebee04d5-772e-4f11-9ca5-a24268283f6d\") " pod="openstack/octavia-0e5d-account-create-update-df54g" Jan 26 19:20:41 crc kubenswrapper[4787]: I0126 19:20:41.949693 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebee04d5-772e-4f11-9ca5-a24268283f6d-operator-scripts\") pod \"octavia-0e5d-account-create-update-df54g\" (UID: \"ebee04d5-772e-4f11-9ca5-a24268283f6d\") " pod="openstack/octavia-0e5d-account-create-update-df54g" Jan 26 19:20:42 crc kubenswrapper[4787]: I0126 19:20:42.052035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg8wq\" (UniqueName: \"kubernetes.io/projected/ebee04d5-772e-4f11-9ca5-a24268283f6d-kube-api-access-sg8wq\") pod \"octavia-0e5d-account-create-update-df54g\" (UID: \"ebee04d5-772e-4f11-9ca5-a24268283f6d\") " pod="openstack/octavia-0e5d-account-create-update-df54g" Jan 26 19:20:42 crc kubenswrapper[4787]: I0126 19:20:42.052105 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ebee04d5-772e-4f11-9ca5-a24268283f6d-operator-scripts\") pod \"octavia-0e5d-account-create-update-df54g\" (UID: \"ebee04d5-772e-4f11-9ca5-a24268283f6d\") " pod="openstack/octavia-0e5d-account-create-update-df54g" Jan 26 19:20:42 crc kubenswrapper[4787]: I0126 19:20:42.053311 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebee04d5-772e-4f11-9ca5-a24268283f6d-operator-scripts\") pod \"octavia-0e5d-account-create-update-df54g\" (UID: \"ebee04d5-772e-4f11-9ca5-a24268283f6d\") " pod="openstack/octavia-0e5d-account-create-update-df54g" Jan 26 19:20:42 crc kubenswrapper[4787]: I0126 19:20:42.073164 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg8wq\" (UniqueName: \"kubernetes.io/projected/ebee04d5-772e-4f11-9ca5-a24268283f6d-kube-api-access-sg8wq\") pod \"octavia-0e5d-account-create-update-df54g\" (UID: \"ebee04d5-772e-4f11-9ca5-a24268283f6d\") " pod="openstack/octavia-0e5d-account-create-update-df54g" Jan 26 19:20:42 crc kubenswrapper[4787]: I0126 19:20:42.195848 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-0e5d-account-create-update-df54g" Jan 26 19:20:42 crc kubenswrapper[4787]: I0126 19:20:42.691360 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-0e5d-account-create-update-df54g"] Jan 26 19:20:42 crc kubenswrapper[4787]: W0126 19:20:42.697158 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebee04d5_772e_4f11_9ca5_a24268283f6d.slice/crio-b5d65181412cf0131eb81eaedd491ef7215e4ba7de42a0805c0bc38ec1fe66cd WatchSource:0}: Error finding container b5d65181412cf0131eb81eaedd491ef7215e4ba7de42a0805c0bc38ec1fe66cd: Status 404 returned error can't find the container with id b5d65181412cf0131eb81eaedd491ef7215e4ba7de42a0805c0bc38ec1fe66cd Jan 26 19:20:42 crc kubenswrapper[4787]: I0126 19:20:42.859035 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-0e5d-account-create-update-df54g" event={"ID":"ebee04d5-772e-4f11-9ca5-a24268283f6d","Type":"ContainerStarted","Data":"b5d65181412cf0131eb81eaedd491ef7215e4ba7de42a0805c0bc38ec1fe66cd"} Jan 26 19:20:42 crc kubenswrapper[4787]: I0126 19:20:42.860917 4787 generic.go:334] "Generic (PLEG): container finished" podID="b7cb052f-a346-4b9e-82bd-4cb6bcef7563" containerID="df048c0b0c75be9019fc3a7efebbd4a9c72355b8caf02beac89570786b591688" exitCode=0 Jan 26 19:20:42 crc kubenswrapper[4787]: I0126 19:20:42.861011 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-87b7d" event={"ID":"b7cb052f-a346-4b9e-82bd-4cb6bcef7563","Type":"ContainerDied","Data":"df048c0b0c75be9019fc3a7efebbd4a9c72355b8caf02beac89570786b591688"} Jan 26 19:20:43 crc kubenswrapper[4787]: I0126 19:20:43.871564 4787 generic.go:334] "Generic (PLEG): container finished" podID="ebee04d5-772e-4f11-9ca5-a24268283f6d" containerID="b8218d050cde4348df551cdb4999fce44cd075df785e6d95ae0fa0e0a459ed3a" exitCode=0 Jan 26 19:20:43 crc kubenswrapper[4787]: 
I0126 19:20:43.871629 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-0e5d-account-create-update-df54g" event={"ID":"ebee04d5-772e-4f11-9ca5-a24268283f6d","Type":"ContainerDied","Data":"b8218d050cde4348df551cdb4999fce44cd075df785e6d95ae0fa0e0a459ed3a"} Jan 26 19:20:44 crc kubenswrapper[4787]: I0126 19:20:44.225574 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-87b7d" Jan 26 19:20:44 crc kubenswrapper[4787]: I0126 19:20:44.297538 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-operator-scripts\") pod \"b7cb052f-a346-4b9e-82bd-4cb6bcef7563\" (UID: \"b7cb052f-a346-4b9e-82bd-4cb6bcef7563\") " Jan 26 19:20:44 crc kubenswrapper[4787]: I0126 19:20:44.297660 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8blj4\" (UniqueName: \"kubernetes.io/projected/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-kube-api-access-8blj4\") pod \"b7cb052f-a346-4b9e-82bd-4cb6bcef7563\" (UID: \"b7cb052f-a346-4b9e-82bd-4cb6bcef7563\") " Jan 26 19:20:44 crc kubenswrapper[4787]: I0126 19:20:44.298669 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7cb052f-a346-4b9e-82bd-4cb6bcef7563" (UID: "b7cb052f-a346-4b9e-82bd-4cb6bcef7563"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:20:44 crc kubenswrapper[4787]: I0126 19:20:44.303532 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-kube-api-access-8blj4" (OuterVolumeSpecName: "kube-api-access-8blj4") pod "b7cb052f-a346-4b9e-82bd-4cb6bcef7563" (UID: "b7cb052f-a346-4b9e-82bd-4cb6bcef7563"). InnerVolumeSpecName "kube-api-access-8blj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:20:44 crc kubenswrapper[4787]: I0126 19:20:44.399861 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:44 crc kubenswrapper[4787]: I0126 19:20:44.399904 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8blj4\" (UniqueName: \"kubernetes.io/projected/b7cb052f-a346-4b9e-82bd-4cb6bcef7563-kube-api-access-8blj4\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:44 crc kubenswrapper[4787]: I0126 19:20:44.880673 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-87b7d" Jan 26 19:20:44 crc kubenswrapper[4787]: I0126 19:20:44.880673 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-87b7d" event={"ID":"b7cb052f-a346-4b9e-82bd-4cb6bcef7563","Type":"ContainerDied","Data":"4991f36a0ac91005ba580be429a9641012fe97510b1b38b91f66bd914469e711"} Jan 26 19:20:44 crc kubenswrapper[4787]: I0126 19:20:44.881111 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4991f36a0ac91005ba580be429a9641012fe97510b1b38b91f66bd914469e711" Jan 26 19:20:45 crc kubenswrapper[4787]: I0126 19:20:45.504472 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-0e5d-account-create-update-df54g" Jan 26 19:20:45 crc kubenswrapper[4787]: I0126 19:20:45.628047 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebee04d5-772e-4f11-9ca5-a24268283f6d-operator-scripts\") pod \"ebee04d5-772e-4f11-9ca5-a24268283f6d\" (UID: \"ebee04d5-772e-4f11-9ca5-a24268283f6d\") " Jan 26 19:20:45 crc kubenswrapper[4787]: I0126 19:20:45.628133 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg8wq\" (UniqueName: \"kubernetes.io/projected/ebee04d5-772e-4f11-9ca5-a24268283f6d-kube-api-access-sg8wq\") pod \"ebee04d5-772e-4f11-9ca5-a24268283f6d\" (UID: \"ebee04d5-772e-4f11-9ca5-a24268283f6d\") " Jan 26 19:20:45 crc kubenswrapper[4787]: I0126 19:20:45.628743 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebee04d5-772e-4f11-9ca5-a24268283f6d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ebee04d5-772e-4f11-9ca5-a24268283f6d" (UID: "ebee04d5-772e-4f11-9ca5-a24268283f6d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:20:45 crc kubenswrapper[4787]: I0126 19:20:45.634666 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebee04d5-772e-4f11-9ca5-a24268283f6d-kube-api-access-sg8wq" (OuterVolumeSpecName: "kube-api-access-sg8wq") pod "ebee04d5-772e-4f11-9ca5-a24268283f6d" (UID: "ebee04d5-772e-4f11-9ca5-a24268283f6d"). InnerVolumeSpecName "kube-api-access-sg8wq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:20:45 crc kubenswrapper[4787]: I0126 19:20:45.731340 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebee04d5-772e-4f11-9ca5-a24268283f6d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:45 crc kubenswrapper[4787]: I0126 19:20:45.731386 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg8wq\" (UniqueName: \"kubernetes.io/projected/ebee04d5-772e-4f11-9ca5-a24268283f6d-kube-api-access-sg8wq\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:45 crc kubenswrapper[4787]: I0126 19:20:45.891621 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-0e5d-account-create-update-df54g" event={"ID":"ebee04d5-772e-4f11-9ca5-a24268283f6d","Type":"ContainerDied","Data":"b5d65181412cf0131eb81eaedd491ef7215e4ba7de42a0805c0bc38ec1fe66cd"} Jan 26 19:20:45 crc kubenswrapper[4787]: I0126 19:20:45.891960 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5d65181412cf0131eb81eaedd491ef7215e4ba7de42a0805c0bc38ec1fe66cd" Jan 26 19:20:45 crc kubenswrapper[4787]: I0126 19:20:45.891676 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-0e5d-account-create-update-df54g" Jan 26 19:20:47 crc kubenswrapper[4787]: I0126 19:20:47.953541 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-7bfb695c56-2zk57"] Jan 26 19:20:47 crc kubenswrapper[4787]: E0126 19:20:47.954344 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7cb052f-a346-4b9e-82bd-4cb6bcef7563" containerName="mariadb-database-create" Jan 26 19:20:47 crc kubenswrapper[4787]: I0126 19:20:47.954359 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7cb052f-a346-4b9e-82bd-4cb6bcef7563" containerName="mariadb-database-create" Jan 26 19:20:47 crc kubenswrapper[4787]: E0126 19:20:47.954375 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebee04d5-772e-4f11-9ca5-a24268283f6d" containerName="mariadb-account-create-update" Jan 26 19:20:47 crc kubenswrapper[4787]: I0126 19:20:47.954381 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebee04d5-772e-4f11-9ca5-a24268283f6d" containerName="mariadb-account-create-update" Jan 26 19:20:47 crc kubenswrapper[4787]: I0126 19:20:47.954545 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7cb052f-a346-4b9e-82bd-4cb6bcef7563" containerName="mariadb-database-create" Jan 26 19:20:47 crc kubenswrapper[4787]: I0126 19:20:47.954574 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebee04d5-772e-4f11-9ca5-a24268283f6d" containerName="mariadb-account-create-update" Jan 26 19:20:47 crc kubenswrapper[4787]: I0126 19:20:47.955971 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:47 crc kubenswrapper[4787]: I0126 19:20:47.959006 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 26 19:20:47 crc kubenswrapper[4787]: I0126 19:20:47.959038 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-hdwmd" Jan 26 19:20:47 crc kubenswrapper[4787]: I0126 19:20:47.959224 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 26 19:20:47 crc kubenswrapper[4787]: I0126 19:20:47.964569 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-7bfb695c56-2zk57"] Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.072887 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc21cbde-c282-4e55-80f9-3c12ded80c02-scripts\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.073100 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/fc21cbde-c282-4e55-80f9-3c12ded80c02-octavia-run\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.073159 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fc21cbde-c282-4e55-80f9-3c12ded80c02-config-data-merged\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 
19:20:48.073303 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc21cbde-c282-4e55-80f9-3c12ded80c02-combined-ca-bundle\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.073548 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc21cbde-c282-4e55-80f9-3c12ded80c02-config-data\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.175278 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc21cbde-c282-4e55-80f9-3c12ded80c02-config-data\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.175413 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc21cbde-c282-4e55-80f9-3c12ded80c02-scripts\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.175461 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/fc21cbde-c282-4e55-80f9-3c12ded80c02-octavia-run\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.175485 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fc21cbde-c282-4e55-80f9-3c12ded80c02-config-data-merged\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.175510 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc21cbde-c282-4e55-80f9-3c12ded80c02-combined-ca-bundle\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.175921 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fc21cbde-c282-4e55-80f9-3c12ded80c02-config-data-merged\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.176227 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/fc21cbde-c282-4e55-80f9-3c12ded80c02-octavia-run\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.183762 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc21cbde-c282-4e55-80f9-3c12ded80c02-scripts\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.183762 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fc21cbde-c282-4e55-80f9-3c12ded80c02-combined-ca-bundle\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.184097 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc21cbde-c282-4e55-80f9-3c12ded80c02-config-data\") pod \"octavia-api-7bfb695c56-2zk57\" (UID: \"fc21cbde-c282-4e55-80f9-3c12ded80c02\") " pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.288332 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:20:48 crc kubenswrapper[4787]: W0126 19:20:48.791752 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc21cbde_c282_4e55_80f9_3c12ded80c02.slice/crio-95a810f31f50067790104c119b57b8951c918c31521ed4b8852b385d4aea9134 WatchSource:0}: Error finding container 95a810f31f50067790104c119b57b8951c918c31521ed4b8852b385d4aea9134: Status 404 returned error can't find the container with id 95a810f31f50067790104c119b57b8951c918c31521ed4b8852b385d4aea9134 Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.792268 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-7bfb695c56-2zk57"] Jan 26 19:20:48 crc kubenswrapper[4787]: I0126 19:20:48.929629 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7bfb695c56-2zk57" event={"ID":"fc21cbde-c282-4e55-80f9-3c12ded80c02","Type":"ContainerStarted","Data":"95a810f31f50067790104c119b57b8951c918c31521ed4b8852b385d4aea9134"} Jan 26 19:20:59 crc kubenswrapper[4787]: I0126 19:20:59.034993 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7bfb695c56-2zk57" 
event={"ID":"fc21cbde-c282-4e55-80f9-3c12ded80c02","Type":"ContainerStarted","Data":"c93f7cc3e37439285e188e7f09540868aa1afc3664d18e3cb173b8aef52cdd22"} Jan 26 19:20:59 crc kubenswrapper[4787]: I0126 19:20:59.139102 4787 scope.go:117] "RemoveContainer" containerID="4fc99f4e88a1afc4d779c61641f5993141e81174a45c2b7d6b424dc51f135af9" Jan 26 19:20:59 crc kubenswrapper[4787]: I0126 19:20:59.171973 4787 scope.go:117] "RemoveContainer" containerID="f22587002addd2fce599fe8874abdc705cbe9a6453697670e1579de379d32e12" Jan 26 19:20:59 crc kubenswrapper[4787]: I0126 19:20:59.207519 4787 scope.go:117] "RemoveContainer" containerID="71d0a94f19b26b3bf57fd6be788e894b6ac86e87a5147dbf833144ac05f414c2" Jan 26 19:20:59 crc kubenswrapper[4787]: I0126 19:20:59.355564 4787 scope.go:117] "RemoveContainer" containerID="347c2e1063162c2ff65c8a5391d8e719ce7c0e8559659d8a2b9d34278980b55b" Jan 26 19:21:00 crc kubenswrapper[4787]: I0126 19:21:00.046222 4787 generic.go:334] "Generic (PLEG): container finished" podID="fc21cbde-c282-4e55-80f9-3c12ded80c02" containerID="c93f7cc3e37439285e188e7f09540868aa1afc3664d18e3cb173b8aef52cdd22" exitCode=0 Jan 26 19:21:00 crc kubenswrapper[4787]: I0126 19:21:00.046314 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7bfb695c56-2zk57" event={"ID":"fc21cbde-c282-4e55-80f9-3c12ded80c02","Type":"ContainerDied","Data":"c93f7cc3e37439285e188e7f09540868aa1afc3664d18e3cb173b8aef52cdd22"} Jan 26 19:21:01 crc kubenswrapper[4787]: I0126 19:21:01.056579 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7bfb695c56-2zk57" event={"ID":"fc21cbde-c282-4e55-80f9-3c12ded80c02","Type":"ContainerStarted","Data":"8a98019495631b938de77658866d382609ad89b4c4969a149a0252eaccbd478f"} Jan 26 19:21:02 crc kubenswrapper[4787]: I0126 19:21:02.067077 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7bfb695c56-2zk57" 
event={"ID":"fc21cbde-c282-4e55-80f9-3c12ded80c02","Type":"ContainerStarted","Data":"ef575c4f77248b7b3bf7929fdcc223073f6d4ba0be1cc2a0ef688775c2d60a3e"} Jan 26 19:21:02 crc kubenswrapper[4787]: I0126 19:21:02.067318 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:21:02 crc kubenswrapper[4787]: I0126 19:21:02.098392 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-7bfb695c56-2zk57" podStartSLOduration=5.297909235 podStartE2EDuration="15.098373127s" podCreationTimestamp="2026-01-26 19:20:47 +0000 UTC" firstStartedPulling="2026-01-26 19:20:48.79529937 +0000 UTC m=+5817.502435503" lastFinishedPulling="2026-01-26 19:20:58.595763262 +0000 UTC m=+5827.302899395" observedRunningTime="2026-01-26 19:21:02.096176154 +0000 UTC m=+5830.803312287" watchObservedRunningTime="2026-01-26 19:21:02.098373127 +0000 UTC m=+5830.805509280" Jan 26 19:21:03 crc kubenswrapper[4787]: I0126 19:21:03.079417 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.304181 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nnjf5" podUID="105e0e3a-23ba-431e-8736-ffce799cf8f2" containerName="ovn-controller" probeResult="failure" output=< Jan 26 19:21:06 crc kubenswrapper[4787]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 19:21:06 crc kubenswrapper[4787]: > Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.321966 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.322072 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4nlbm" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.429343 4787 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nnjf5-config-dnqgd"] Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.430790 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.434905 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.442632 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nnjf5-config-dnqgd"] Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.571377 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-additional-scripts\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.571470 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-scripts\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.571525 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdxv\" (UniqueName: \"kubernetes.io/projected/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-kube-api-access-jsdxv\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.571553 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run-ovn\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.571574 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-log-ovn\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.571977 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.673703 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-additional-scripts\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.673818 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-scripts\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.673877 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsdxv\" (UniqueName: \"kubernetes.io/projected/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-kube-api-access-jsdxv\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.673928 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run-ovn\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.673971 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-log-ovn\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.674081 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.675239 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run-ovn\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.675284 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-additional-scripts\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.675378 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-log-ovn\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.675609 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.678509 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-scripts\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.701368 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsdxv\" (UniqueName: \"kubernetes.io/projected/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-kube-api-access-jsdxv\") pod \"ovn-controller-nnjf5-config-dnqgd\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:06 crc kubenswrapper[4787]: I0126 19:21:06.756258 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:07 crc kubenswrapper[4787]: I0126 19:21:07.236321 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nnjf5-config-dnqgd"] Jan 26 19:21:08 crc kubenswrapper[4787]: I0126 19:21:08.127872 4787 generic.go:334] "Generic (PLEG): container finished" podID="4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0" containerID="fb9bc80d9bd68694787d1ee24108afbec0ec732b6fc92b60dc782228cddf42ff" exitCode=0 Jan 26 19:21:08 crc kubenswrapper[4787]: I0126 19:21:08.127995 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nnjf5-config-dnqgd" event={"ID":"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0","Type":"ContainerDied","Data":"fb9bc80d9bd68694787d1ee24108afbec0ec732b6fc92b60dc782228cddf42ff"} Jan 26 19:21:08 crc kubenswrapper[4787]: I0126 19:21:08.134876 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nnjf5-config-dnqgd" event={"ID":"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0","Type":"ContainerStarted","Data":"3f31afa01a570c79e11b5c9ffae39a9ad5ac7afd3442c1e321838fa3126dbe8e"} Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.672241 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.740255 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-scripts\") pod \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.740316 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-log-ovn\") pod \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.740374 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run\") pod \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.740510 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run-ovn\") pod \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.740551 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsdxv\" (UniqueName: \"kubernetes.io/projected/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-kube-api-access-jsdxv\") pod \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.740708 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-additional-scripts\") pod \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\" (UID: \"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0\") " Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.741675 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0" (UID: "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.742682 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run" (OuterVolumeSpecName: "var-run") pod "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0" (UID: "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.742713 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-scripts" (OuterVolumeSpecName: "scripts") pod "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0" (UID: "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.742771 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0" (UID: "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.743137 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0" (UID: "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.757822 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-kube-api-access-jsdxv" (OuterVolumeSpecName: "kube-api-access-jsdxv") pod "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0" (UID: "4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0"). InnerVolumeSpecName "kube-api-access-jsdxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.843367 4787 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.843409 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.843419 4787 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.843428 4787 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run\") on node 
\"crc\" DevicePath \"\"" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.843440 4787 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:09 crc kubenswrapper[4787]: I0126 19:21:09.843452 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsdxv\" (UniqueName: \"kubernetes.io/projected/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0-kube-api-access-jsdxv\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.156236 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nnjf5-config-dnqgd" event={"ID":"4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0","Type":"ContainerDied","Data":"3f31afa01a570c79e11b5c9ffae39a9ad5ac7afd3442c1e321838fa3126dbe8e"} Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.156484 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f31afa01a570c79e11b5c9ffae39a9ad5ac7afd3442c1e321838fa3126dbe8e" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.156285 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nnjf5-config-dnqgd" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.762750 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nnjf5-config-dnqgd"] Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.778292 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nnjf5-config-dnqgd"] Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.888856 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nnjf5-config-qrsk6"] Jan 26 19:21:10 crc kubenswrapper[4787]: E0126 19:21:10.889364 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0" containerName="ovn-config" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.889387 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0" containerName="ovn-config" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.889620 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0" containerName="ovn-config" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.890479 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.894756 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.899261 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nnjf5-config-qrsk6"] Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.963086 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run-ovn\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.963275 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-log-ovn\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.963359 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-scripts\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.963427 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-additional-scripts\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: 
\"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.963510 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnb5d\" (UniqueName: \"kubernetes.io/projected/c6055a27-6387-4a70-b4fa-9d877e36b9e2-kube-api-access-dnb5d\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:10 crc kubenswrapper[4787]: I0126 19:21:10.963573 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.065735 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-log-ovn\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.065834 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-scripts\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.065874 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-additional-scripts\") pod 
\"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.065930 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnb5d\" (UniqueName: \"kubernetes.io/projected/c6055a27-6387-4a70-b4fa-9d877e36b9e2-kube-api-access-dnb5d\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.066002 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.066084 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run-ovn\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.066507 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run-ovn\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.066582 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-log-ovn\") pod \"ovn-controller-nnjf5-config-qrsk6\" 
(UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.068176 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.069010 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-additional-scripts\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.069666 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-scripts\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.085672 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnb5d\" (UniqueName: \"kubernetes.io/projected/c6055a27-6387-4a70-b4fa-9d877e36b9e2-kube-api-access-dnb5d\") pod \"ovn-controller-nnjf5-config-qrsk6\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.208716 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.339204 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nnjf5" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.601111 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0" path="/var/lib/kubelet/pods/4531fa78-4ca9-43aa-bb6a-ac3f2cc659e0/volumes" Jan 26 19:21:11 crc kubenswrapper[4787]: I0126 19:21:11.714982 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nnjf5-config-qrsk6"] Jan 26 19:21:12 crc kubenswrapper[4787]: I0126 19:21:12.178158 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nnjf5-config-qrsk6" event={"ID":"c6055a27-6387-4a70-b4fa-9d877e36b9e2","Type":"ContainerStarted","Data":"37825d6251dc5111aad182e19f10adab3b49f46788c07fd612dc2da6c73d6bf5"} Jan 26 19:21:12 crc kubenswrapper[4787]: I0126 19:21:12.178568 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nnjf5-config-qrsk6" event={"ID":"c6055a27-6387-4a70-b4fa-9d877e36b9e2","Type":"ContainerStarted","Data":"07119b614762f956e4f7955f8364eb6ea4f48e58d8320e3b14ba0a84b27506a6"} Jan 26 19:21:12 crc kubenswrapper[4787]: I0126 19:21:12.196937 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nnjf5-config-qrsk6" podStartSLOduration=2.196918405 podStartE2EDuration="2.196918405s" podCreationTimestamp="2026-01-26 19:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:21:12.193822979 +0000 UTC m=+5840.900959122" watchObservedRunningTime="2026-01-26 19:21:12.196918405 +0000 UTC m=+5840.904054528" Jan 26 19:21:13 crc kubenswrapper[4787]: I0126 19:21:13.188078 4787 generic.go:334] "Generic (PLEG): container 
finished" podID="c6055a27-6387-4a70-b4fa-9d877e36b9e2" containerID="37825d6251dc5111aad182e19f10adab3b49f46788c07fd612dc2da6c73d6bf5" exitCode=0 Jan 26 19:21:13 crc kubenswrapper[4787]: I0126 19:21:13.188122 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nnjf5-config-qrsk6" event={"ID":"c6055a27-6387-4a70-b4fa-9d877e36b9e2","Type":"ContainerDied","Data":"37825d6251dc5111aad182e19f10adab3b49f46788c07fd612dc2da6c73d6bf5"} Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.606085 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.631815 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run\") pod \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.631895 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-additional-scripts\") pod \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.631935 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-scripts\") pod \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.632146 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run-ovn\") pod \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\" 
(UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.632177 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-log-ovn\") pod \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.632239 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnb5d\" (UniqueName: \"kubernetes.io/projected/c6055a27-6387-4a70-b4fa-9d877e36b9e2-kube-api-access-dnb5d\") pod \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\" (UID: \"c6055a27-6387-4a70-b4fa-9d877e36b9e2\") " Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.632345 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run" (OuterVolumeSpecName: "var-run") pod "c6055a27-6387-4a70-b4fa-9d877e36b9e2" (UID: "c6055a27-6387-4a70-b4fa-9d877e36b9e2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.632385 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c6055a27-6387-4a70-b4fa-9d877e36b9e2" (UID: "c6055a27-6387-4a70-b4fa-9d877e36b9e2"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.632406 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c6055a27-6387-4a70-b4fa-9d877e36b9e2" (UID: "c6055a27-6387-4a70-b4fa-9d877e36b9e2"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.632709 4787 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.632723 4787 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.632733 4787 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c6055a27-6387-4a70-b4fa-9d877e36b9e2-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.632971 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c6055a27-6387-4a70-b4fa-9d877e36b9e2" (UID: "c6055a27-6387-4a70-b4fa-9d877e36b9e2"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.633295 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-scripts" (OuterVolumeSpecName: "scripts") pod "c6055a27-6387-4a70-b4fa-9d877e36b9e2" (UID: "c6055a27-6387-4a70-b4fa-9d877e36b9e2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.638444 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6055a27-6387-4a70-b4fa-9d877e36b9e2-kube-api-access-dnb5d" (OuterVolumeSpecName: "kube-api-access-dnb5d") pod "c6055a27-6387-4a70-b4fa-9d877e36b9e2" (UID: "c6055a27-6387-4a70-b4fa-9d877e36b9e2"). InnerVolumeSpecName "kube-api-access-dnb5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.734732 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnb5d\" (UniqueName: \"kubernetes.io/projected/c6055a27-6387-4a70-b4fa-9d877e36b9e2-kube-api-access-dnb5d\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.734767 4787 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.734781 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6055a27-6387-4a70-b4fa-9d877e36b9e2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.780886 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nnjf5-config-qrsk6"] Jan 26 19:21:14 crc kubenswrapper[4787]: I0126 19:21:14.790979 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nnjf5-config-qrsk6"] Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.204692 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07119b614762f956e4f7955f8364eb6ea4f48e58d8320e3b14ba0a84b27506a6" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.204751 4787 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-controller-nnjf5-config-qrsk6" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.602313 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6055a27-6387-4a70-b4fa-9d877e36b9e2" path="/var/lib/kubelet/pods/c6055a27-6387-4a70-b4fa-9d877e36b9e2/volumes" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.639869 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-k8plr"] Jan 26 19:21:15 crc kubenswrapper[4787]: E0126 19:21:15.641561 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6055a27-6387-4a70-b4fa-9d877e36b9e2" containerName="ovn-config" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.641672 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6055a27-6387-4a70-b4fa-9d877e36b9e2" containerName="ovn-config" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.642186 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6055a27-6387-4a70-b4fa-9d877e36b9e2" containerName="ovn-config" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.644203 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.649873 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.650316 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.651276 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.652939 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578a3da9-d799-446d-ae5e-41ab628669b9-scripts\") pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.653226 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/578a3da9-d799-446d-ae5e-41ab628669b9-hm-ports\") pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.653615 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/578a3da9-d799-446d-ae5e-41ab628669b9-config-data-merged\") pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.653791 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578a3da9-d799-446d-ae5e-41ab628669b9-config-data\") 
pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.689145 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-k8plr"] Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.756376 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578a3da9-d799-446d-ae5e-41ab628669b9-config-data\") pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.757414 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578a3da9-d799-446d-ae5e-41ab628669b9-scripts\") pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.757709 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/578a3da9-d799-446d-ae5e-41ab628669b9-hm-ports\") pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.757909 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/578a3da9-d799-446d-ae5e-41ab628669b9-config-data-merged\") pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.758473 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/578a3da9-d799-446d-ae5e-41ab628669b9-config-data-merged\") pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.758685 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/578a3da9-d799-446d-ae5e-41ab628669b9-hm-ports\") pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.761815 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578a3da9-d799-446d-ae5e-41ab628669b9-scripts\") pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.762369 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578a3da9-d799-446d-ae5e-41ab628669b9-config-data\") pod \"octavia-rsyslog-k8plr\" (UID: \"578a3da9-d799-446d-ae5e-41ab628669b9\") " pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:15 crc kubenswrapper[4787]: I0126 19:21:15.972282 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.205930 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-cd5xf"] Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.207969 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.210604 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.237377 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-cd5xf"] Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.373299 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/6278f881-4d92-46b7-afdc-9e4958beb386-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-cd5xf\" (UID: \"6278f881-4d92-46b7-afdc-9e4958beb386\") " pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.374482 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6278f881-4d92-46b7-afdc-9e4958beb386-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-cd5xf\" (UID: \"6278f881-4d92-46b7-afdc-9e4958beb386\") " pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.477288 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/6278f881-4d92-46b7-afdc-9e4958beb386-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-cd5xf\" (UID: \"6278f881-4d92-46b7-afdc-9e4958beb386\") " pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.477651 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6278f881-4d92-46b7-afdc-9e4958beb386-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-cd5xf\" (UID: 
\"6278f881-4d92-46b7-afdc-9e4958beb386\") " pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.477749 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/6278f881-4d92-46b7-afdc-9e4958beb386-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-cd5xf\" (UID: \"6278f881-4d92-46b7-afdc-9e4958beb386\") " pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.484294 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6278f881-4d92-46b7-afdc-9e4958beb386-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-cd5xf\" (UID: \"6278f881-4d92-46b7-afdc-9e4958beb386\") " pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.537297 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.675769 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-k8plr"] Jan 26 19:21:16 crc kubenswrapper[4787]: I0126 19:21:16.760845 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-k8plr"] Jan 26 19:21:17 crc kubenswrapper[4787]: W0126 19:21:17.072800 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6278f881_4d92_46b7_afdc_9e4958beb386.slice/crio-ad529c97c81d169ab300e9d8b58e8ab6b491864db53ddc4a66ce7e117579f726 WatchSource:0}: Error finding container ad529c97c81d169ab300e9d8b58e8ab6b491864db53ddc4a66ce7e117579f726: Status 404 returned error can't find the container with id ad529c97c81d169ab300e9d8b58e8ab6b491864db53ddc4a66ce7e117579f726 Jan 26 19:21:17 crc kubenswrapper[4787]: I0126 19:21:17.073056 
4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-cd5xf"] Jan 26 19:21:17 crc kubenswrapper[4787]: I0126 19:21:17.239829 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-k8plr" event={"ID":"578a3da9-d799-446d-ae5e-41ab628669b9","Type":"ContainerStarted","Data":"55bc52e37845616be924c10850c6f1d7aca971379d9cde5f8a37d9cfbeed143b"} Jan 26 19:21:17 crc kubenswrapper[4787]: I0126 19:21:17.241636 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" event={"ID":"6278f881-4d92-46b7-afdc-9e4958beb386","Type":"ContainerStarted","Data":"ad529c97c81d169ab300e9d8b58e8ab6b491864db53ddc4a66ce7e117579f726"} Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.873500 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-s566r"] Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.878176 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.882379 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.894924 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-combined-ca-bundle\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.895036 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data-merged\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.895079 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.895148 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-scripts\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.914099 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-s566r"] Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 
19:21:21.995659 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data-merged\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.995721 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.995781 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-scripts\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.995812 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-combined-ca-bundle\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:21 crc kubenswrapper[4787]: I0126 19:21:21.997484 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data-merged\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:22 crc kubenswrapper[4787]: I0126 19:21:22.002013 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-combined-ca-bundle\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:22 crc kubenswrapper[4787]: I0126 19:21:22.014122 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:22 crc kubenswrapper[4787]: I0126 19:21:22.015642 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-scripts\") pod \"octavia-db-sync-s566r\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:22 crc kubenswrapper[4787]: I0126 19:21:22.208876 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:22 crc kubenswrapper[4787]: I0126 19:21:22.318388 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-k8plr" event={"ID":"578a3da9-d799-446d-ae5e-41ab628669b9","Type":"ContainerStarted","Data":"edf1f52bb5f98974daca530440a8078a160d92946ac33da8b19554af0013d0e0"} Jan 26 19:21:22 crc kubenswrapper[4787]: I0126 19:21:22.787399 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-s566r"] Jan 26 19:21:22 crc kubenswrapper[4787]: W0126 19:21:22.795169 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e153ee6_c310_414d_973f_13e5cd1b936c.slice/crio-1ed3b11d745cee9cf3db69798402dea50dd12c2dd6665e0bd078500594ed634e WatchSource:0}: Error finding container 1ed3b11d745cee9cf3db69798402dea50dd12c2dd6665e0bd078500594ed634e: Status 404 returned error can't find the container with id 1ed3b11d745cee9cf3db69798402dea50dd12c2dd6665e0bd078500594ed634e Jan 26 19:21:23 crc kubenswrapper[4787]: I0126 19:21:23.331375 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-s566r" event={"ID":"3e153ee6-c310-414d-973f-13e5cd1b936c","Type":"ContainerStarted","Data":"1ed3b11d745cee9cf3db69798402dea50dd12c2dd6665e0bd078500594ed634e"} Jan 26 19:21:23 crc kubenswrapper[4787]: I0126 19:21:23.723186 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:21:23 crc kubenswrapper[4787]: I0126 19:21:23.761599 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-7bfb695c56-2zk57" Jan 26 19:21:24 crc kubenswrapper[4787]: I0126 19:21:24.349353 4787 generic.go:334] "Generic (PLEG): container finished" podID="3e153ee6-c310-414d-973f-13e5cd1b936c" containerID="eb4008169cc31a07489b4f1ec9bd5ff5d08cb7cb48614378d87d11d4af287ba3" exitCode=0 
Jan 26 19:21:24 crc kubenswrapper[4787]: I0126 19:21:24.349596 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-s566r" event={"ID":"3e153ee6-c310-414d-973f-13e5cd1b936c","Type":"ContainerDied","Data":"eb4008169cc31a07489b4f1ec9bd5ff5d08cb7cb48614378d87d11d4af287ba3"} Jan 26 19:21:24 crc kubenswrapper[4787]: I0126 19:21:24.353600 4787 generic.go:334] "Generic (PLEG): container finished" podID="578a3da9-d799-446d-ae5e-41ab628669b9" containerID="edf1f52bb5f98974daca530440a8078a160d92946ac33da8b19554af0013d0e0" exitCode=0 Jan 26 19:21:24 crc kubenswrapper[4787]: I0126 19:21:24.353704 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-k8plr" event={"ID":"578a3da9-d799-446d-ae5e-41ab628669b9","Type":"ContainerDied","Data":"edf1f52bb5f98974daca530440a8078a160d92946ac33da8b19554af0013d0e0"} Jan 26 19:21:33 crc kubenswrapper[4787]: E0126 19:21:33.698832 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/gthiemonge/octavia-amphora-image:latest" Jan 26 19:21:33 crc kubenswrapper[4787]: E0126 19:21:33.699599 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/gthiemonge/octavia-amphora-image,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEST_DIR,Value:/usr/local/apache2/htdocs,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:amphora-image,ReadOnly:false,MountPath:/usr/local/apache2/htdocs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-image-upload-7b97d6bc64-cd5xf_openstack(6278f881-4d92-46b7-afdc-9e4958beb386): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 19:21:33 crc kubenswrapper[4787]: E0126 19:21:33.701041 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" podUID="6278f881-4d92-46b7-afdc-9e4958beb386" Jan 26 19:21:34 crc kubenswrapper[4787]: I0126 19:21:34.464093 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-s566r" event={"ID":"3e153ee6-c310-414d-973f-13e5cd1b936c","Type":"ContainerStarted","Data":"36bd33021285315efbada5121525354e78ae4627f10b0413943aed5bf95820da"} Jan 26 19:21:34 crc 
kubenswrapper[4787]: I0126 19:21:34.469198 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-k8plr" event={"ID":"578a3da9-d799-446d-ae5e-41ab628669b9","Type":"ContainerStarted","Data":"2fb6ff745a90fee64a0f0eafebabf8f85004806c518e9451ae705efcafa7dc87"} Jan 26 19:21:34 crc kubenswrapper[4787]: I0126 19:21:34.469813 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:34 crc kubenswrapper[4787]: E0126 19:21:34.470820 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/gthiemonge/octavia-amphora-image\\\"\"" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" podUID="6278f881-4d92-46b7-afdc-9e4958beb386" Jan 26 19:21:34 crc kubenswrapper[4787]: I0126 19:21:34.490302 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-s566r" podStartSLOduration=13.490285063 podStartE2EDuration="13.490285063s" podCreationTimestamp="2026-01-26 19:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:21:34.486569262 +0000 UTC m=+5863.193705395" watchObservedRunningTime="2026-01-26 19:21:34.490285063 +0000 UTC m=+5863.197421196" Jan 26 19:21:34 crc kubenswrapper[4787]: I0126 19:21:34.515929 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-k8plr" podStartSLOduration=2.16497565 podStartE2EDuration="19.515470405s" podCreationTimestamp="2026-01-26 19:21:15 +0000 UTC" firstStartedPulling="2026-01-26 19:21:16.696558524 +0000 UTC m=+5845.403694657" lastFinishedPulling="2026-01-26 19:21:34.047053279 +0000 UTC m=+5862.754189412" observedRunningTime="2026-01-26 19:21:34.509884129 +0000 UTC m=+5863.217020262" watchObservedRunningTime="2026-01-26 19:21:34.515470405 +0000 UTC 
m=+5863.222606538" Jan 26 19:21:38 crc kubenswrapper[4787]: I0126 19:21:38.506520 4787 generic.go:334] "Generic (PLEG): container finished" podID="3e153ee6-c310-414d-973f-13e5cd1b936c" containerID="36bd33021285315efbada5121525354e78ae4627f10b0413943aed5bf95820da" exitCode=0 Jan 26 19:21:38 crc kubenswrapper[4787]: I0126 19:21:38.506617 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-s566r" event={"ID":"3e153ee6-c310-414d-973f-13e5cd1b936c","Type":"ContainerDied","Data":"36bd33021285315efbada5121525354e78ae4627f10b0413943aed5bf95820da"} Jan 26 19:21:39 crc kubenswrapper[4787]: I0126 19:21:39.956835 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.077257 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data\") pod \"3e153ee6-c310-414d-973f-13e5cd1b936c\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.077376 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data-merged\") pod \"3e153ee6-c310-414d-973f-13e5cd1b936c\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.077641 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-scripts\") pod \"3e153ee6-c310-414d-973f-13e5cd1b936c\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.077662 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-combined-ca-bundle\") pod \"3e153ee6-c310-414d-973f-13e5cd1b936c\" (UID: \"3e153ee6-c310-414d-973f-13e5cd1b936c\") " Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.084000 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-scripts" (OuterVolumeSpecName: "scripts") pod "3e153ee6-c310-414d-973f-13e5cd1b936c" (UID: "3e153ee6-c310-414d-973f-13e5cd1b936c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.084414 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data" (OuterVolumeSpecName: "config-data") pod "3e153ee6-c310-414d-973f-13e5cd1b936c" (UID: "3e153ee6-c310-414d-973f-13e5cd1b936c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.103479 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "3e153ee6-c310-414d-973f-13e5cd1b936c" (UID: "3e153ee6-c310-414d-973f-13e5cd1b936c"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.124822 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e153ee6-c310-414d-973f-13e5cd1b936c" (UID: "3e153ee6-c310-414d-973f-13e5cd1b936c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.183206 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.183274 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.183297 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.183319 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3e153ee6-c310-414d-973f-13e5cd1b936c-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.527042 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-s566r" Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.527032 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-s566r" event={"ID":"3e153ee6-c310-414d-973f-13e5cd1b936c","Type":"ContainerDied","Data":"1ed3b11d745cee9cf3db69798402dea50dd12c2dd6665e0bd078500594ed634e"} Jan 26 19:21:40 crc kubenswrapper[4787]: I0126 19:21:40.527167 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed3b11d745cee9cf3db69798402dea50dd12c2dd6665e0bd078500594ed634e" Jan 26 19:21:46 crc kubenswrapper[4787]: I0126 19:21:46.015770 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-k8plr" Jan 26 19:21:49 crc kubenswrapper[4787]: I0126 19:21:49.613212 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" event={"ID":"6278f881-4d92-46b7-afdc-9e4958beb386","Type":"ContainerStarted","Data":"63cfd6090398b9f4a8187f66c242ccd649ee5b826606dd4f3656afc4091a5f8d"} Jan 26 19:21:50 crc kubenswrapper[4787]: I0126 19:21:50.631920 4787 generic.go:334] "Generic (PLEG): container finished" podID="6278f881-4d92-46b7-afdc-9e4958beb386" containerID="63cfd6090398b9f4a8187f66c242ccd649ee5b826606dd4f3656afc4091a5f8d" exitCode=0 Jan 26 19:21:50 crc kubenswrapper[4787]: I0126 19:21:50.632013 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" event={"ID":"6278f881-4d92-46b7-afdc-9e4958beb386","Type":"ContainerDied","Data":"63cfd6090398b9f4a8187f66c242ccd649ee5b826606dd4f3656afc4091a5f8d"} Jan 26 19:21:51 crc kubenswrapper[4787]: I0126 19:21:51.645868 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" event={"ID":"6278f881-4d92-46b7-afdc-9e4958beb386","Type":"ContainerStarted","Data":"97639d61e9dff965105db32e2743609be8156dc0be7b0200b0066575ebc53588"} Jan 26 
19:22:14 crc kubenswrapper[4787]: I0126 19:22:14.081380 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" podStartSLOduration=26.875253319 podStartE2EDuration="58.081353992s" podCreationTimestamp="2026-01-26 19:21:16 +0000 UTC" firstStartedPulling="2026-01-26 19:21:17.078150229 +0000 UTC m=+5845.785286362" lastFinishedPulling="2026-01-26 19:21:48.284250902 +0000 UTC m=+5876.991387035" observedRunningTime="2026-01-26 19:21:51.674025984 +0000 UTC m=+5880.381162127" watchObservedRunningTime="2026-01-26 19:22:14.081353992 +0000 UTC m=+5902.788490125" Jan 26 19:22:14 crc kubenswrapper[4787]: I0126 19:22:14.089012 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-cd5xf"] Jan 26 19:22:14 crc kubenswrapper[4787]: I0126 19:22:14.089372 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" podUID="6278f881-4d92-46b7-afdc-9e4958beb386" containerName="octavia-amphora-httpd" containerID="cri-o://97639d61e9dff965105db32e2743609be8156dc0be7b0200b0066575ebc53588" gracePeriod=30 Jan 26 19:22:14 crc kubenswrapper[4787]: I0126 19:22:14.859407 4787 generic.go:334] "Generic (PLEG): container finished" podID="6278f881-4d92-46b7-afdc-9e4958beb386" containerID="97639d61e9dff965105db32e2743609be8156dc0be7b0200b0066575ebc53588" exitCode=0 Jan 26 19:22:14 crc kubenswrapper[4787]: I0126 19:22:14.859459 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" event={"ID":"6278f881-4d92-46b7-afdc-9e4958beb386","Type":"ContainerDied","Data":"97639d61e9dff965105db32e2743609be8156dc0be7b0200b0066575ebc53588"} Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.297927 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.392526 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6278f881-4d92-46b7-afdc-9e4958beb386-httpd-config\") pod \"6278f881-4d92-46b7-afdc-9e4958beb386\" (UID: \"6278f881-4d92-46b7-afdc-9e4958beb386\") " Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.392705 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/6278f881-4d92-46b7-afdc-9e4958beb386-amphora-image\") pod \"6278f881-4d92-46b7-afdc-9e4958beb386\" (UID: \"6278f881-4d92-46b7-afdc-9e4958beb386\") " Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.421821 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6278f881-4d92-46b7-afdc-9e4958beb386-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6278f881-4d92-46b7-afdc-9e4958beb386" (UID: "6278f881-4d92-46b7-afdc-9e4958beb386"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.479185 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6278f881-4d92-46b7-afdc-9e4958beb386-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "6278f881-4d92-46b7-afdc-9e4958beb386" (UID: "6278f881-4d92-46b7-afdc-9e4958beb386"). InnerVolumeSpecName "amphora-image". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.501513 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6278f881-4d92-46b7-afdc-9e4958beb386-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.501550 4787 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/6278f881-4d92-46b7-afdc-9e4958beb386-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.871138 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" event={"ID":"6278f881-4d92-46b7-afdc-9e4958beb386","Type":"ContainerDied","Data":"ad529c97c81d169ab300e9d8b58e8ab6b491864db53ddc4a66ce7e117579f726"} Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.871190 4787 scope.go:117] "RemoveContainer" containerID="97639d61e9dff965105db32e2743609be8156dc0be7b0200b0066575ebc53588" Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.871200 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-cd5xf" Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.905072 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-cd5xf"] Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.905824 4787 scope.go:117] "RemoveContainer" containerID="63cfd6090398b9f4a8187f66c242ccd649ee5b826606dd4f3656afc4091a5f8d" Jan 26 19:22:15 crc kubenswrapper[4787]: I0126 19:22:15.916698 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-cd5xf"] Jan 26 19:22:17 crc kubenswrapper[4787]: I0126 19:22:17.600899 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6278f881-4d92-46b7-afdc-9e4958beb386" path="/var/lib/kubelet/pods/6278f881-4d92-46b7-afdc-9e4958beb386/volumes" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.107420 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-dnb2k"] Jan 26 19:22:18 crc kubenswrapper[4787]: E0126 19:22:18.107885 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e153ee6-c310-414d-973f-13e5cd1b936c" containerName="octavia-db-sync" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.107906 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e153ee6-c310-414d-973f-13e5cd1b936c" containerName="octavia-db-sync" Jan 26 19:22:18 crc kubenswrapper[4787]: E0126 19:22:18.107920 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6278f881-4d92-46b7-afdc-9e4958beb386" containerName="init" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.107926 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6278f881-4d92-46b7-afdc-9e4958beb386" containerName="init" Jan 26 19:22:18 crc kubenswrapper[4787]: E0126 19:22:18.107938 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e153ee6-c310-414d-973f-13e5cd1b936c" containerName="init" Jan 26 
19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.107961 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e153ee6-c310-414d-973f-13e5cd1b936c" containerName="init" Jan 26 19:22:18 crc kubenswrapper[4787]: E0126 19:22:18.107979 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6278f881-4d92-46b7-afdc-9e4958beb386" containerName="octavia-amphora-httpd" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.107987 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6278f881-4d92-46b7-afdc-9e4958beb386" containerName="octavia-amphora-httpd" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.108166 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6278f881-4d92-46b7-afdc-9e4958beb386" containerName="octavia-amphora-httpd" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.108188 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e153ee6-c310-414d-973f-13e5cd1b936c" containerName="octavia-db-sync" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.109187 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.112262 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.117249 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-dnb2k"] Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.252284 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/596da395-e80c-4fbe-bd79-00dbcb170095-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-dnb2k\" (UID: \"596da395-e80c-4fbe-bd79-00dbcb170095\") " pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.252416 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/596da395-e80c-4fbe-bd79-00dbcb170095-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-dnb2k\" (UID: \"596da395-e80c-4fbe-bd79-00dbcb170095\") " pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.355147 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/596da395-e80c-4fbe-bd79-00dbcb170095-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-dnb2k\" (UID: \"596da395-e80c-4fbe-bd79-00dbcb170095\") " pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.355214 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/596da395-e80c-4fbe-bd79-00dbcb170095-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-dnb2k\" (UID: 
\"596da395-e80c-4fbe-bd79-00dbcb170095\") " pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.356460 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/596da395-e80c-4fbe-bd79-00dbcb170095-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-dnb2k\" (UID: \"596da395-e80c-4fbe-bd79-00dbcb170095\") " pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.360816 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/596da395-e80c-4fbe-bd79-00dbcb170095-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-dnb2k\" (UID: \"596da395-e80c-4fbe-bd79-00dbcb170095\") " pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.438844 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" Jan 26 19:22:18 crc kubenswrapper[4787]: I0126 19:22:18.901373 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-dnb2k"] Jan 26 19:22:18 crc kubenswrapper[4787]: W0126 19:22:18.915095 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod596da395_e80c_4fbe_bd79_00dbcb170095.slice/crio-5a3de340a64d212b46128223e6b6dd7dbfd9acddcb154de93cfcef02da305fb2 WatchSource:0}: Error finding container 5a3de340a64d212b46128223e6b6dd7dbfd9acddcb154de93cfcef02da305fb2: Status 404 returned error can't find the container with id 5a3de340a64d212b46128223e6b6dd7dbfd9acddcb154de93cfcef02da305fb2 Jan 26 19:22:19 crc kubenswrapper[4787]: I0126 19:22:19.910370 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" 
event={"ID":"596da395-e80c-4fbe-bd79-00dbcb170095","Type":"ContainerStarted","Data":"a247815160f194353ed7e60a943f7248c28c0071d8400bfe09509179a4fa5875"} Jan 26 19:22:19 crc kubenswrapper[4787]: I0126 19:22:19.910923 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" event={"ID":"596da395-e80c-4fbe-bd79-00dbcb170095","Type":"ContainerStarted","Data":"5a3de340a64d212b46128223e6b6dd7dbfd9acddcb154de93cfcef02da305fb2"} Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.054513 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-ffs52"] Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.056664 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.059073 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.059601 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.060427 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.071022 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-ffs52"] Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.195594 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/8577bfe8-3f32-4374-8234-be0dd1530414-hm-ports\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.195664 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-combined-ca-bundle\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.195693 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-scripts\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.195720 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-amphora-certs\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.196021 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8577bfe8-3f32-4374-8234-be0dd1530414-config-data-merged\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.196277 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-config-data\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.298305 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/8577bfe8-3f32-4374-8234-be0dd1530414-hm-ports\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.298398 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-combined-ca-bundle\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.298428 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-scripts\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.298463 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-amphora-certs\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.298538 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8577bfe8-3f32-4374-8234-be0dd1530414-config-data-merged\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.298638 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-config-data\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.299298 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8577bfe8-3f32-4374-8234-be0dd1530414-config-data-merged\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.300204 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/8577bfe8-3f32-4374-8234-be0dd1530414-hm-ports\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.305888 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-scripts\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.305888 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-config-data\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.305913 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-combined-ca-bundle\") pod 
\"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.319926 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8577bfe8-3f32-4374-8234-be0dd1530414-amphora-certs\") pod \"octavia-healthmanager-ffs52\" (UID: \"8577bfe8-3f32-4374-8234-be0dd1530414\") " pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.377551 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-ffs52" Jan 26 19:22:20 crc kubenswrapper[4787]: I0126 19:22:20.957551 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-ffs52"] Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.791224 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-gj49t"] Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.793256 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-gj49t" Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.795419 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.798250 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.803650 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-gj49t"] Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.935758 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-config-data\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t" Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.935806 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-amphora-certs\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t" Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.935875 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-combined-ca-bundle\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t" Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.935938 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: 
\"kubernetes.io/configmap/33a7c869-71e5-4f17-9a09-ad53a1f02519-hm-ports\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t" Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.936157 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/33a7c869-71e5-4f17-9a09-ad53a1f02519-config-data-merged\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t" Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.936190 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-scripts\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t" Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.955150 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-ffs52" event={"ID":"8577bfe8-3f32-4374-8234-be0dd1530414","Type":"ContainerStarted","Data":"4240f5014248fab72c54c4e8c89e30b86d499c02f2e5925dc637b80c6074c257"} Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.955201 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-ffs52" event={"ID":"8577bfe8-3f32-4374-8234-be0dd1530414","Type":"ContainerStarted","Data":"8134966db78433e79fb267648b180f0e78a8e86a79dc5ff0a0417abd99b06c1e"} Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.960523 4787 generic.go:334] "Generic (PLEG): container finished" podID="596da395-e80c-4fbe-bd79-00dbcb170095" containerID="a247815160f194353ed7e60a943f7248c28c0071d8400bfe09509179a4fa5875" exitCode=0 Jan 26 19:22:21 crc kubenswrapper[4787]: I0126 19:22:21.960567 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" event={"ID":"596da395-e80c-4fbe-bd79-00dbcb170095","Type":"ContainerDied","Data":"a247815160f194353ed7e60a943f7248c28c0071d8400bfe09509179a4fa5875"} Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.041853 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/33a7c869-71e5-4f17-9a09-ad53a1f02519-config-data-merged\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t" Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.041915 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-scripts\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t" Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.042028 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-config-data\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t" Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.042047 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-amphora-certs\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t" Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.042120 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-combined-ca-bundle\") pod 
\"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t"
Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.042166 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/33a7c869-71e5-4f17-9a09-ad53a1f02519-hm-ports\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t"
Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.049717 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-scripts\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t"
Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.050046 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/33a7c869-71e5-4f17-9a09-ad53a1f02519-config-data-merged\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t"
Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.050387 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-amphora-certs\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t"
Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.056025 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/33a7c869-71e5-4f17-9a09-ad53a1f02519-hm-ports\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t"
Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.059401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-combined-ca-bundle\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t"
Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.059417 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a7c869-71e5-4f17-9a09-ad53a1f02519-config-data\") pod \"octavia-housekeeping-gj49t\" (UID: \"33a7c869-71e5-4f17-9a09-ad53a1f02519\") " pod="openstack/octavia-housekeeping-gj49t"
Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.111803 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-gj49t"
Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.716727 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-gj49t"]
Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.991165 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" event={"ID":"596da395-e80c-4fbe-bd79-00dbcb170095","Type":"ContainerStarted","Data":"a7ef90db592b91804bfb115c2ed6a49a0d1e0408adf223cf20d8627595852d38"}
Jan 26 19:22:22 crc kubenswrapper[4787]: I0126 19:22:22.992355 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-gj49t" event={"ID":"33a7c869-71e5-4f17-9a09-ad53a1f02519","Type":"ContainerStarted","Data":"9afedaa39625d93a0ecc05a85d4b1c764547676095855b3beaf77783f96066ee"}
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.011853 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-7b97d6bc64-dnb2k" podStartSLOduration=4.528261906 podStartE2EDuration="5.011838429s" podCreationTimestamp="2026-01-26 19:22:18 +0000 UTC" firstStartedPulling="2026-01-26 19:22:18.919298335 +0000 UTC m=+5907.626434468" lastFinishedPulling="2026-01-26 19:22:19.402874858 +0000 UTC m=+5908.110010991" observedRunningTime="2026-01-26 19:22:23.010307052 +0000 UTC m=+5911.717443185" watchObservedRunningTime="2026-01-26 19:22:23.011838429 +0000 UTC m=+5911.718974572"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.536894 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-vhgdv"]
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.538990 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.541853 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.542020 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.551290 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-vhgdv"]
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.690731 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-combined-ca-bundle\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.691052 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-scripts\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.691412 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-config-data-merged\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.691497 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-amphora-certs\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.691810 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-config-data\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.691851 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-hm-ports\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.793992 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-combined-ca-bundle\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.794099 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-scripts\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.794127 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-config-data-merged\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.794166 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-amphora-certs\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.794205 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-config-data\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.794222 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-hm-ports\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.794866 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-config-data-merged\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.795437 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-hm-ports\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.800023 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-amphora-certs\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.800778 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-config-data\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.801269 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-scripts\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.801636 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e82ebb82-ace0-4d7f-8535-3533fa78f9d2-combined-ca-bundle\") pod \"octavia-worker-vhgdv\" (UID: \"e82ebb82-ace0-4d7f-8535-3533fa78f9d2\") " pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:23 crc kubenswrapper[4787]: I0126 19:22:23.862583 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:24 crc kubenswrapper[4787]: I0126 19:22:24.003724 4787 generic.go:334] "Generic (PLEG): container finished" podID="8577bfe8-3f32-4374-8234-be0dd1530414" containerID="4240f5014248fab72c54c4e8c89e30b86d499c02f2e5925dc637b80c6074c257" exitCode=0
Jan 26 19:22:24 crc kubenswrapper[4787]: I0126 19:22:24.003786 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-ffs52" event={"ID":"8577bfe8-3f32-4374-8234-be0dd1530414","Type":"ContainerDied","Data":"4240f5014248fab72c54c4e8c89e30b86d499c02f2e5925dc637b80c6074c257"}
Jan 26 19:22:24 crc kubenswrapper[4787]: I0126 19:22:24.632376 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-vhgdv"]
Jan 26 19:22:24 crc kubenswrapper[4787]: W0126 19:22:24.640551 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode82ebb82_ace0_4d7f_8535_3533fa78f9d2.slice/crio-bbdf05c0bbc6399ee858f7cb5b447a1969e48a33067a2818cf10a65f81ec2083 WatchSource:0}: Error finding container bbdf05c0bbc6399ee858f7cb5b447a1969e48a33067a2818cf10a65f81ec2083: Status 404 returned error can't find the container with id bbdf05c0bbc6399ee858f7cb5b447a1969e48a33067a2818cf10a65f81ec2083
Jan 26 19:22:25 crc kubenswrapper[4787]: I0126 19:22:25.017648 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-gj49t" event={"ID":"33a7c869-71e5-4f17-9a09-ad53a1f02519","Type":"ContainerStarted","Data":"8474533014f5e708542f1e54f34f160dc381bb92e8339e991c87b0a37733c80d"}
Jan 26 19:22:25 crc kubenswrapper[4787]: I0126 19:22:25.021321 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-ffs52" event={"ID":"8577bfe8-3f32-4374-8234-be0dd1530414","Type":"ContainerStarted","Data":"e8854260647ade2337271117274d5bfd803b054073923249d6af2e2510c87f65"}
Jan 26 19:22:25 crc kubenswrapper[4787]: I0126 19:22:25.022228 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-ffs52"
Jan 26 19:22:25 crc kubenswrapper[4787]: I0126 19:22:25.025813 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-vhgdv" event={"ID":"e82ebb82-ace0-4d7f-8535-3533fa78f9d2","Type":"ContainerStarted","Data":"bbdf05c0bbc6399ee858f7cb5b447a1969e48a33067a2818cf10a65f81ec2083"}
Jan 26 19:22:25 crc kubenswrapper[4787]: I0126 19:22:25.079630 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-ffs52" podStartSLOduration=5.079608699 podStartE2EDuration="5.079608699s" podCreationTimestamp="2026-01-26 19:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:22:25.073915429 +0000 UTC m=+5913.781051562" watchObservedRunningTime="2026-01-26 19:22:25.079608699 +0000 UTC m=+5913.786744832"
Jan 26 19:22:25 crc kubenswrapper[4787]: I0126 19:22:25.148302 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-ffs52"]
Jan 26 19:22:27 crc kubenswrapper[4787]: I0126 19:22:27.057758 4787 generic.go:334] "Generic (PLEG): container finished" podID="33a7c869-71e5-4f17-9a09-ad53a1f02519" containerID="8474533014f5e708542f1e54f34f160dc381bb92e8339e991c87b0a37733c80d" exitCode=0
Jan 26 19:22:27 crc kubenswrapper[4787]: I0126 19:22:27.057833 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-gj49t" event={"ID":"33a7c869-71e5-4f17-9a09-ad53a1f02519","Type":"ContainerDied","Data":"8474533014f5e708542f1e54f34f160dc381bb92e8339e991c87b0a37733c80d"}
Jan 26 19:22:28 crc kubenswrapper[4787]: I0126 19:22:28.068961 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-vhgdv" event={"ID":"e82ebb82-ace0-4d7f-8535-3533fa78f9d2","Type":"ContainerStarted","Data":"538e8afe703e42e244ae9a6d90f1f892d056a457e547a12f199b4e42df6a3c58"}
Jan 26 19:22:28 crc kubenswrapper[4787]: I0126 19:22:28.072700 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-gj49t" event={"ID":"33a7c869-71e5-4f17-9a09-ad53a1f02519","Type":"ContainerStarted","Data":"ec45f7e1f91754cbf31fc357d27a36281a57859e4bd41a4f743d11db4d4bee37"}
Jan 26 19:22:28 crc kubenswrapper[4787]: I0126 19:22:28.073022 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-gj49t"
Jan 26 19:22:28 crc kubenswrapper[4787]: I0126 19:22:28.118572 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-gj49t" podStartSLOduration=5.602117595 podStartE2EDuration="7.118552614s" podCreationTimestamp="2026-01-26 19:22:21 +0000 UTC" firstStartedPulling="2026-01-26 19:22:22.724000292 +0000 UTC m=+5911.431136425" lastFinishedPulling="2026-01-26 19:22:24.240435311 +0000 UTC m=+5912.947571444" observedRunningTime="2026-01-26 19:22:28.10729199 +0000 UTC m=+5916.814428123" watchObservedRunningTime="2026-01-26 19:22:28.118552614 +0000 UTC m=+5916.825688757"
Jan 26 19:22:29 crc kubenswrapper[4787]: I0126 19:22:29.083614 4787 generic.go:334] "Generic (PLEG): container finished" podID="e82ebb82-ace0-4d7f-8535-3533fa78f9d2" containerID="538e8afe703e42e244ae9a6d90f1f892d056a457e547a12f199b4e42df6a3c58" exitCode=0
Jan 26 19:22:29 crc kubenswrapper[4787]: I0126 19:22:29.084308 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-vhgdv" event={"ID":"e82ebb82-ace0-4d7f-8535-3533fa78f9d2","Type":"ContainerDied","Data":"538e8afe703e42e244ae9a6d90f1f892d056a457e547a12f199b4e42df6a3c58"}
Jan 26 19:22:32 crc kubenswrapper[4787]: I0126 19:22:32.111444 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-vhgdv" event={"ID":"e82ebb82-ace0-4d7f-8535-3533fa78f9d2","Type":"ContainerStarted","Data":"635d73c35971fa7c0aa385640187bc6042cae85c1bc0caf63950481a920b5e4f"}
Jan 26 19:22:32 crc kubenswrapper[4787]: I0126 19:22:32.111991 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:32 crc kubenswrapper[4787]: I0126 19:22:32.140003 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-vhgdv" podStartSLOduration=6.472153594 podStartE2EDuration="9.139963859s" podCreationTimestamp="2026-01-26 19:22:23 +0000 UTC" firstStartedPulling="2026-01-26 19:22:24.643616641 +0000 UTC m=+5913.350752774" lastFinishedPulling="2026-01-26 19:22:27.311426906 +0000 UTC m=+5916.018563039" observedRunningTime="2026-01-26 19:22:32.130470808 +0000 UTC m=+5920.837606941" watchObservedRunningTime="2026-01-26 19:22:32.139963859 +0000 UTC m=+5920.847100002"
Jan 26 19:22:35 crc kubenswrapper[4787]: I0126 19:22:35.409350 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-ffs52"
Jan 26 19:22:37 crc kubenswrapper[4787]: I0126 19:22:37.145693 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-gj49t"
Jan 26 19:22:38 crc kubenswrapper[4787]: I0126 19:22:38.892838 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-vhgdv"
Jan 26 19:22:46 crc kubenswrapper[4787]: I0126 19:22:46.808347 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 19:22:46 crc kubenswrapper[4787]: I0126 19:22:46.809390 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 19:22:59 crc kubenswrapper[4787]: I0126 19:22:59.477516 4787 scope.go:117] "RemoveContainer" containerID="1dbc676ddae94b10b05e619b686a758abb15598db3cc864a59c00e281fab5e5d"
Jan 26 19:23:00 crc kubenswrapper[4787]: I0126 19:23:00.065998 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a770-account-create-update-ptgbv"]
Jan 26 19:23:00 crc kubenswrapper[4787]: I0126 19:23:00.079534 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-d4v68"]
Jan 26 19:23:00 crc kubenswrapper[4787]: I0126 19:23:00.090029 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a770-account-create-update-ptgbv"]
Jan 26 19:23:00 crc kubenswrapper[4787]: I0126 19:23:00.108465 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-d4v68"]
Jan 26 19:23:01 crc kubenswrapper[4787]: I0126 19:23:01.606469 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465cd4b3-ac1f-4e89-b560-6e40d2754fe3" path="/var/lib/kubelet/pods/465cd4b3-ac1f-4e89-b560-6e40d2754fe3/volumes"
Jan 26 19:23:01 crc kubenswrapper[4787]: I0126 19:23:01.607467 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84d8fb0-d826-48fe-9661-68f05468a9f2" path="/var/lib/kubelet/pods/e84d8fb0-d826-48fe-9661-68f05468a9f2/volumes"
Jan 26 19:23:09 crc kubenswrapper[4787]: I0126 19:23:09.038350 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pmknk"]
Jan 26 19:23:09 crc kubenswrapper[4787]: I0126 19:23:09.049164 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-pmknk"]
Jan 26 19:23:09 crc kubenswrapper[4787]: I0126 19:23:09.601223 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad2d4cef-d169-474d-a245-7e489c22cb6c" path="/var/lib/kubelet/pods/ad2d4cef-d169-474d-a245-7e489c22cb6c/volumes"
Jan 26 19:23:16 crc kubenswrapper[4787]: I0126 19:23:16.808062 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 19:23:16 crc kubenswrapper[4787]: I0126 19:23:16.808657 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.831006 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-79b7d7c77c-9sjzp"]
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.833256 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.835448 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.835849 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-vlp7d"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.835914 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.836090 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.854585 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcd03098-9c84-4642-86a7-4e6638732177-logs\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.854703 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-scripts\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.854905 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dcd03098-9c84-4642-86a7-4e6638732177-horizon-secret-key\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.854978 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-config-data\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.855096 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k49xk\" (UniqueName: \"kubernetes.io/projected/dcd03098-9c84-4642-86a7-4e6638732177-kube-api-access-k49xk\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.864764 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79b7d7c77c-9sjzp"]
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.938625 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.938857 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0bf8e19f-5564-406a-82ac-27fe75ba40c0" containerName="glance-log" containerID="cri-o://0ba1c0f3ed40d9d48536b6d37513a39dd74bba05e8835b6ea81f4ee5c414848c" gracePeriod=30
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.939333 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0bf8e19f-5564-406a-82ac-27fe75ba40c0" containerName="glance-httpd" containerID="cri-o://81fc3b27bd24132d96a57783d5b94412cf5088038bbfb60ca3b9d698d4fba316" gracePeriod=30
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.955734 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cc8dc8649-pzbh7"]
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.956907 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dcd03098-9c84-4642-86a7-4e6638732177-horizon-secret-key\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.956987 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-config-data\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.957072 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k49xk\" (UniqueName: \"kubernetes.io/projected/dcd03098-9c84-4642-86a7-4e6638732177-kube-api-access-k49xk\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.957104 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcd03098-9c84-4642-86a7-4e6638732177-logs\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.957163 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-scripts\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.957842 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.958508 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-scripts\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.961061 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcd03098-9c84-4642-86a7-4e6638732177-logs\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.961841 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-config-data\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.966940 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cc8dc8649-pzbh7"]
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.972592 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dcd03098-9c84-4642-86a7-4e6638732177-horizon-secret-key\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:28 crc kubenswrapper[4787]: I0126 19:23:28.982341 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k49xk\" (UniqueName: \"kubernetes.io/projected/dcd03098-9c84-4642-86a7-4e6638732177-kube-api-access-k49xk\") pod \"horizon-79b7d7c77c-9sjzp\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.036612 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.036892 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="330b30cb-11c3-4d78-aa27-a522b770aa67" containerName="glance-log" containerID="cri-o://2677c97a2a2d38fe8afa249f8228504b8768f1056b0ac0a0c70a415c11870909" gracePeriod=30
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.036979 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="330b30cb-11c3-4d78-aa27-a522b770aa67" containerName="glance-httpd" containerID="cri-o://2ad6023bdccb28717ea07e7d21c98782767a45516ce92c950c3ac4ab40753e28" gracePeriod=30
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.059202 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9h76\" (UniqueName: \"kubernetes.io/projected/921ee18e-4ea9-4270-9f29-515a05ca5eff-kube-api-access-n9h76\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.059287 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-config-data\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.059322 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/921ee18e-4ea9-4270-9f29-515a05ca5eff-horizon-secret-key\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.059374 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-scripts\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.059512 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/921ee18e-4ea9-4270-9f29-515a05ca5eff-logs\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.154401 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79b7d7c77c-9sjzp"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.161117 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9h76\" (UniqueName: \"kubernetes.io/projected/921ee18e-4ea9-4270-9f29-515a05ca5eff-kube-api-access-n9h76\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.161196 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-config-data\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.161221 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/921ee18e-4ea9-4270-9f29-515a05ca5eff-horizon-secret-key\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.161261 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-scripts\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.161374 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/921ee18e-4ea9-4270-9f29-515a05ca5eff-logs\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.162368 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-scripts\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.162484 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-config-data\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.162616 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/921ee18e-4ea9-4270-9f29-515a05ca5eff-logs\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.165822 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/921ee18e-4ea9-4270-9f29-515a05ca5eff-horizon-secret-key\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.182146 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9h76\" (UniqueName: \"kubernetes.io/projected/921ee18e-4ea9-4270-9f29-515a05ca5eff-kube-api-access-n9h76\") pod \"horizon-cc8dc8649-pzbh7\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.429297 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cc8dc8649-pzbh7"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.634119 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79b7d7c77c-9sjzp"]
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.670548 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79b7d7c77c-9sjzp"]
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.685588 4787 generic.go:334] "Generic (PLEG): container finished" podID="0bf8e19f-5564-406a-82ac-27fe75ba40c0" containerID="0ba1c0f3ed40d9d48536b6d37513a39dd74bba05e8835b6ea81f4ee5c414848c" exitCode=143
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.685716 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0bf8e19f-5564-406a-82ac-27fe75ba40c0","Type":"ContainerDied","Data":"0ba1c0f3ed40d9d48536b6d37513a39dd74bba05e8835b6ea81f4ee5c414848c"}
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.689196 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-655fbc65c-zp8cb"]
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.693075 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-655fbc65c-zp8cb"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.694983 4787 generic.go:334] "Generic (PLEG): container finished" podID="330b30cb-11c3-4d78-aa27-a522b770aa67" containerID="2677c97a2a2d38fe8afa249f8228504b8768f1056b0ac0a0c70a415c11870909" exitCode=143
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.695017 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"330b30cb-11c3-4d78-aa27-a522b770aa67","Type":"ContainerDied","Data":"2677c97a2a2d38fe8afa249f8228504b8768f1056b0ac0a0c70a415c11870909"}
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.699539 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.706194 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-655fbc65c-zp8cb"]
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.776766 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-horizon-secret-key\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.776900 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-config-data\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb"
Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.776930 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-scripts\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.776977 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-logs\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.777029 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xp48\" (UniqueName: \"kubernetes.io/projected/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-kube-api-access-9xp48\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.878751 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-horizon-secret-key\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.878923 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-config-data\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.878973 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-scripts\") pod 
\"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.878999 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-logs\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.879046 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xp48\" (UniqueName: \"kubernetes.io/projected/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-kube-api-access-9xp48\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.879734 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-logs\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.880265 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-scripts\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.880433 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-config-data\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 
19:23:29.884666 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-horizon-secret-key\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.894606 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xp48\" (UniqueName: \"kubernetes.io/projected/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-kube-api-access-9xp48\") pod \"horizon-655fbc65c-zp8cb\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:29 crc kubenswrapper[4787]: I0126 19:23:29.939331 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cc8dc8649-pzbh7"] Jan 26 19:23:29 crc kubenswrapper[4787]: W0126 19:23:29.946807 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod921ee18e_4ea9_4270_9f29_515a05ca5eff.slice/crio-a54e8edf83fa0651a627840bfb6d16484f9a2490cc0a862a1f8d4091fde323ac WatchSource:0}: Error finding container a54e8edf83fa0651a627840bfb6d16484f9a2490cc0a862a1f8d4091fde323ac: Status 404 returned error can't find the container with id a54e8edf83fa0651a627840bfb6d16484f9a2490cc0a862a1f8d4091fde323ac Jan 26 19:23:30 crc kubenswrapper[4787]: I0126 19:23:30.051966 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:30 crc kubenswrapper[4787]: I0126 19:23:30.511731 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-655fbc65c-zp8cb"] Jan 26 19:23:30 crc kubenswrapper[4787]: W0126 19:23:30.522062 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceef0fd6_c0ab_4ec6_b1e6_3065a9e392fc.slice/crio-d25e84e145ec5c2df676c4281eb16c0de4db7eaa436a4cc37a6da2fd3c628dd8 WatchSource:0}: Error finding container d25e84e145ec5c2df676c4281eb16c0de4db7eaa436a4cc37a6da2fd3c628dd8: Status 404 returned error can't find the container with id d25e84e145ec5c2df676c4281eb16c0de4db7eaa436a4cc37a6da2fd3c628dd8 Jan 26 19:23:30 crc kubenswrapper[4787]: I0126 19:23:30.705199 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc8dc8649-pzbh7" event={"ID":"921ee18e-4ea9-4270-9f29-515a05ca5eff","Type":"ContainerStarted","Data":"a54e8edf83fa0651a627840bfb6d16484f9a2490cc0a862a1f8d4091fde323ac"} Jan 26 19:23:30 crc kubenswrapper[4787]: I0126 19:23:30.706427 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79b7d7c77c-9sjzp" event={"ID":"dcd03098-9c84-4642-86a7-4e6638732177","Type":"ContainerStarted","Data":"00f7c8699e205af2b3ca887267ee98101baf753de81d4109f31051ce7a4a620a"} Jan 26 19:23:30 crc kubenswrapper[4787]: I0126 19:23:30.710078 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655fbc65c-zp8cb" event={"ID":"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc","Type":"ContainerStarted","Data":"d25e84e145ec5c2df676c4281eb16c0de4db7eaa436a4cc37a6da2fd3c628dd8"} Jan 26 19:23:32 crc kubenswrapper[4787]: I0126 19:23:32.729906 4787 generic.go:334] "Generic (PLEG): container finished" podID="330b30cb-11c3-4d78-aa27-a522b770aa67" containerID="2ad6023bdccb28717ea07e7d21c98782767a45516ce92c950c3ac4ab40753e28" exitCode=0 Jan 26 19:23:32 crc kubenswrapper[4787]: I0126 
19:23:32.730017 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"330b30cb-11c3-4d78-aa27-a522b770aa67","Type":"ContainerDied","Data":"2ad6023bdccb28717ea07e7d21c98782767a45516ce92c950c3ac4ab40753e28"} Jan 26 19:23:32 crc kubenswrapper[4787]: I0126 19:23:32.734261 4787 generic.go:334] "Generic (PLEG): container finished" podID="0bf8e19f-5564-406a-82ac-27fe75ba40c0" containerID="81fc3b27bd24132d96a57783d5b94412cf5088038bbfb60ca3b9d698d4fba316" exitCode=0 Jan 26 19:23:32 crc kubenswrapper[4787]: I0126 19:23:32.734330 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0bf8e19f-5564-406a-82ac-27fe75ba40c0","Type":"ContainerDied","Data":"81fc3b27bd24132d96a57783d5b94412cf5088038bbfb60ca3b9d698d4fba316"} Jan 26 19:23:34 crc kubenswrapper[4787]: I0126 19:23:34.029629 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-19ee-account-create-update-9v6ph"] Jan 26 19:23:34 crc kubenswrapper[4787]: I0126 19:23:34.041571 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-19ee-account-create-update-9v6ph"] Jan 26 19:23:35 crc kubenswrapper[4787]: I0126 19:23:35.028216 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xzphj"] Jan 26 19:23:35 crc kubenswrapper[4787]: I0126 19:23:35.038405 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xzphj"] Jan 26 19:23:35 crc kubenswrapper[4787]: I0126 19:23:35.602282 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60471611-d3d3-447e-8640-aec193ed80ba" path="/var/lib/kubelet/pods/60471611-d3d3-447e-8640-aec193ed80ba/volumes" Jan 26 19:23:35 crc kubenswrapper[4787]: I0126 19:23:35.603172 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e490a536-2166-4223-813b-112656901c59" 
path="/var/lib/kubelet/pods/e490a536-2166-4223-813b-112656901c59/volumes" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.775205 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.781553 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.782013 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"330b30cb-11c3-4d78-aa27-a522b770aa67","Type":"ContainerDied","Data":"2d8dd416aa3dc08bc6a30a618e368853e784e3d5f928b3441a66310df0353701"} Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.782071 4787 scope.go:117] "RemoveContainer" containerID="2ad6023bdccb28717ea07e7d21c98782767a45516ce92c950c3ac4ab40753e28" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.786627 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc8dc8649-pzbh7" event={"ID":"921ee18e-4ea9-4270-9f29-515a05ca5eff","Type":"ContainerStarted","Data":"1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e"} Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.788527 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79b7d7c77c-9sjzp" event={"ID":"dcd03098-9c84-4642-86a7-4e6638732177","Type":"ContainerStarted","Data":"71819a67a1596edcc2cdd280f262f389162bed2121b2c18d9b90562f52120650"} Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.833080 4787 scope.go:117] "RemoveContainer" containerID="2677c97a2a2d38fe8afa249f8228504b8768f1056b0ac0a0c70a415c11870909" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.835739 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.925088 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc9lg\" (UniqueName: \"kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-kube-api-access-dc9lg\") pod \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.925265 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-scripts\") pod \"330b30cb-11c3-4d78-aa27-a522b770aa67\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.925362 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-config-data\") pod \"330b30cb-11c3-4d78-aa27-a522b770aa67\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.925489 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-logs\") pod \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.925600 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rtpn\" (UniqueName: \"kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-kube-api-access-4rtpn\") pod \"330b30cb-11c3-4d78-aa27-a522b770aa67\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.925850 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-combined-ca-bundle\") pod \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.925913 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-httpd-run\") pod \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.925969 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-config-data\") pod \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.925993 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-httpd-run\") pod \"330b30cb-11c3-4d78-aa27-a522b770aa67\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.926086 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-ceph\") pod \"330b30cb-11c3-4d78-aa27-a522b770aa67\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.926179 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-scripts\") pod \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.926266 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-logs\") pod \"330b30cb-11c3-4d78-aa27-a522b770aa67\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.926325 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-ceph\") pod \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\" (UID: \"0bf8e19f-5564-406a-82ac-27fe75ba40c0\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.926475 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-combined-ca-bundle\") pod \"330b30cb-11c3-4d78-aa27-a522b770aa67\" (UID: \"330b30cb-11c3-4d78-aa27-a522b770aa67\") " Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.928789 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "330b30cb-11c3-4d78-aa27-a522b770aa67" (UID: "330b30cb-11c3-4d78-aa27-a522b770aa67"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.928850 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-logs" (OuterVolumeSpecName: "logs") pod "0bf8e19f-5564-406a-82ac-27fe75ba40c0" (UID: "0bf8e19f-5564-406a-82ac-27fe75ba40c0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.929081 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0bf8e19f-5564-406a-82ac-27fe75ba40c0" (UID: "0bf8e19f-5564-406a-82ac-27fe75ba40c0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.933553 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-logs" (OuterVolumeSpecName: "logs") pod "330b30cb-11c3-4d78-aa27-a522b770aa67" (UID: "330b30cb-11c3-4d78-aa27-a522b770aa67"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.936480 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-kube-api-access-dc9lg" (OuterVolumeSpecName: "kube-api-access-dc9lg") pod "0bf8e19f-5564-406a-82ac-27fe75ba40c0" (UID: "0bf8e19f-5564-406a-82ac-27fe75ba40c0"). InnerVolumeSpecName "kube-api-access-dc9lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.938524 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-scripts" (OuterVolumeSpecName: "scripts") pod "0bf8e19f-5564-406a-82ac-27fe75ba40c0" (UID: "0bf8e19f-5564-406a-82ac-27fe75ba40c0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.946338 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-ceph" (OuterVolumeSpecName: "ceph") pod "0bf8e19f-5564-406a-82ac-27fe75ba40c0" (UID: "0bf8e19f-5564-406a-82ac-27fe75ba40c0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.954503 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-ceph" (OuterVolumeSpecName: "ceph") pod "330b30cb-11c3-4d78-aa27-a522b770aa67" (UID: "330b30cb-11c3-4d78-aa27-a522b770aa67"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.954599 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-scripts" (OuterVolumeSpecName: "scripts") pod "330b30cb-11c3-4d78-aa27-a522b770aa67" (UID: "330b30cb-11c3-4d78-aa27-a522b770aa67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:36 crc kubenswrapper[4787]: I0126 19:23:36.954645 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-kube-api-access-4rtpn" (OuterVolumeSpecName: "kube-api-access-4rtpn") pod "330b30cb-11c3-4d78-aa27-a522b770aa67" (UID: "330b30cb-11c3-4d78-aa27-a522b770aa67"). InnerVolumeSpecName "kube-api-access-4rtpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.029177 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.029532 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.029545 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.029560 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.029571 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330b30cb-11c3-4d78-aa27-a522b770aa67-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.029582 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.029593 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc9lg\" (UniqueName: \"kubernetes.io/projected/0bf8e19f-5564-406a-82ac-27fe75ba40c0-kube-api-access-dc9lg\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.029604 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.029614 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf8e19f-5564-406a-82ac-27fe75ba40c0-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.029624 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rtpn\" (UniqueName: \"kubernetes.io/projected/330b30cb-11c3-4d78-aa27-a522b770aa67-kube-api-access-4rtpn\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.036366 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bf8e19f-5564-406a-82ac-27fe75ba40c0" (UID: "0bf8e19f-5564-406a-82ac-27fe75ba40c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.107147 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "330b30cb-11c3-4d78-aa27-a522b770aa67" (UID: "330b30cb-11c3-4d78-aa27-a522b770aa67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.129110 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-config-data" (OuterVolumeSpecName: "config-data") pod "0bf8e19f-5564-406a-82ac-27fe75ba40c0" (UID: "0bf8e19f-5564-406a-82ac-27fe75ba40c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.132362 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.132752 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf8e19f-5564-406a-82ac-27fe75ba40c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.132793 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.133828 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-config-data" (OuterVolumeSpecName: "config-data") pod "330b30cb-11c3-4d78-aa27-a522b770aa67" (UID: "330b30cb-11c3-4d78-aa27-a522b770aa67"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.235016 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330b30cb-11c3-4d78-aa27-a522b770aa67-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.428567 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.445814 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.456214 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:23:37 crc kubenswrapper[4787]: E0126 19:23:37.456699 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330b30cb-11c3-4d78-aa27-a522b770aa67" containerName="glance-log" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.456724 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="330b30cb-11c3-4d78-aa27-a522b770aa67" containerName="glance-log" Jan 26 19:23:37 crc kubenswrapper[4787]: E0126 19:23:37.456751 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf8e19f-5564-406a-82ac-27fe75ba40c0" containerName="glance-httpd" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.456761 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf8e19f-5564-406a-82ac-27fe75ba40c0" containerName="glance-httpd" Jan 26 19:23:37 crc kubenswrapper[4787]: E0126 19:23:37.456782 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330b30cb-11c3-4d78-aa27-a522b770aa67" containerName="glance-httpd" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.456790 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="330b30cb-11c3-4d78-aa27-a522b770aa67" containerName="glance-httpd" Jan 26 19:23:37 crc 
kubenswrapper[4787]: E0126 19:23:37.456823 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf8e19f-5564-406a-82ac-27fe75ba40c0" containerName="glance-log" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.456852 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf8e19f-5564-406a-82ac-27fe75ba40c0" containerName="glance-log" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.457123 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf8e19f-5564-406a-82ac-27fe75ba40c0" containerName="glance-httpd" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.457142 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="330b30cb-11c3-4d78-aa27-a522b770aa67" containerName="glance-log" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.457160 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="330b30cb-11c3-4d78-aa27-a522b770aa67" containerName="glance-httpd" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.457193 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf8e19f-5564-406a-82ac-27fe75ba40c0" containerName="glance-log" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.458606 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.461036 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.475645 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.605443 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="330b30cb-11c3-4d78-aa27-a522b770aa67" path="/var/lib/kubelet/pods/330b30cb-11c3-4d78-aa27-a522b770aa67/volumes" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.644894 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b22e06a-773a-4fc9-891b-631713f4de49-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.645044 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b22e06a-773a-4fc9-891b-631713f4de49-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.645265 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b22e06a-773a-4fc9-891b-631713f4de49-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.645485 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b22e06a-773a-4fc9-891b-631713f4de49-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.645647 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7b22e06a-773a-4fc9-891b-631713f4de49-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.645688 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbhdn\" (UniqueName: \"kubernetes.io/projected/7b22e06a-773a-4fc9-891b-631713f4de49-kube-api-access-vbhdn\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.645738 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b22e06a-773a-4fc9-891b-631713f4de49-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.747836 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b22e06a-773a-4fc9-891b-631713f4de49-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.747999 4787 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b22e06a-773a-4fc9-891b-631713f4de49-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.748076 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7b22e06a-773a-4fc9-891b-631713f4de49-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.748106 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbhdn\" (UniqueName: \"kubernetes.io/projected/7b22e06a-773a-4fc9-891b-631713f4de49-kube-api-access-vbhdn\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.748139 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b22e06a-773a-4fc9-891b-631713f4de49-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.748203 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b22e06a-773a-4fc9-891b-631713f4de49-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.748225 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7b22e06a-773a-4fc9-891b-631713f4de49-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.748438 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b22e06a-773a-4fc9-891b-631713f4de49-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.749504 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b22e06a-773a-4fc9-891b-631713f4de49-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.752018 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b22e06a-773a-4fc9-891b-631713f4de49-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.752515 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b22e06a-773a-4fc9-891b-631713f4de49-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.753267 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b22e06a-773a-4fc9-891b-631713f4de49-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.753614 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7b22e06a-773a-4fc9-891b-631713f4de49-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.769426 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbhdn\" (UniqueName: \"kubernetes.io/projected/7b22e06a-773a-4fc9-891b-631713f4de49-kube-api-access-vbhdn\") pod \"glance-default-internal-api-0\" (UID: \"7b22e06a-773a-4fc9-891b-631713f4de49\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.793803 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.802597 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655fbc65c-zp8cb" event={"ID":"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc","Type":"ContainerStarted","Data":"3132cbb1395416c0894b5e772e4c43286622faf505f5fc9254937c3e38300c6b"} Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.803063 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655fbc65c-zp8cb" event={"ID":"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc","Type":"ContainerStarted","Data":"0f9ab83d2349a71967960d1673e6245a78af58a932df3ba9ae5b78f7c4957451"} Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.815589 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc8dc8649-pzbh7" event={"ID":"921ee18e-4ea9-4270-9f29-515a05ca5eff","Type":"ContainerStarted","Data":"c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3"} Jan 26 19:23:37 crc 
kubenswrapper[4787]: I0126 19:23:37.825291 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-655fbc65c-zp8cb" podStartSLOduration=2.9241680370000003 podStartE2EDuration="8.82526766s" podCreationTimestamp="2026-01-26 19:23:29 +0000 UTC" firstStartedPulling="2026-01-26 19:23:30.524447066 +0000 UTC m=+5979.231583189" lastFinishedPulling="2026-01-26 19:23:36.425546679 +0000 UTC m=+5985.132682812" observedRunningTime="2026-01-26 19:23:37.822481602 +0000 UTC m=+5986.529617735" watchObservedRunningTime="2026-01-26 19:23:37.82526766 +0000 UTC m=+5986.532403793" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.827286 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79b7d7c77c-9sjzp" event={"ID":"dcd03098-9c84-4642-86a7-4e6638732177","Type":"ContainerStarted","Data":"4c928bb5cbc15310761c595d8079895a01da4415c602ab64d2a4a209bb3ea367"} Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.827469 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79b7d7c77c-9sjzp" podUID="dcd03098-9c84-4642-86a7-4e6638732177" containerName="horizon-log" containerID="cri-o://71819a67a1596edcc2cdd280f262f389162bed2121b2c18d9b90562f52120650" gracePeriod=30 Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.827499 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79b7d7c77c-9sjzp" podUID="dcd03098-9c84-4642-86a7-4e6638732177" containerName="horizon" containerID="cri-o://4c928bb5cbc15310761c595d8079895a01da4415c602ab64d2a4a209bb3ea367" gracePeriod=30 Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.836774 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0bf8e19f-5564-406a-82ac-27fe75ba40c0","Type":"ContainerDied","Data":"2a0ceae02f8852c2bffaca921a746644f9fb15a8969d40b0bdb515ce228c75cd"} Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.836823 
4787 scope.go:117] "RemoveContainer" containerID="81fc3b27bd24132d96a57783d5b94412cf5088038bbfb60ca3b9d698d4fba316" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.837010 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.886514 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cc8dc8649-pzbh7" podStartSLOduration=3.460307442 podStartE2EDuration="9.886491028s" podCreationTimestamp="2026-01-26 19:23:28 +0000 UTC" firstStartedPulling="2026-01-26 19:23:29.949538232 +0000 UTC m=+5978.656674375" lastFinishedPulling="2026-01-26 19:23:36.375721828 +0000 UTC m=+5985.082857961" observedRunningTime="2026-01-26 19:23:37.855377672 +0000 UTC m=+5986.562513815" watchObservedRunningTime="2026-01-26 19:23:37.886491028 +0000 UTC m=+5986.593627161" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.903721 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-79b7d7c77c-9sjzp" podStartSLOduration=3.224897251 podStartE2EDuration="9.903701207s" podCreationTimestamp="2026-01-26 19:23:28 +0000 UTC" firstStartedPulling="2026-01-26 19:23:29.69930684 +0000 UTC m=+5978.406442973" lastFinishedPulling="2026-01-26 19:23:36.378110796 +0000 UTC m=+5985.085246929" observedRunningTime="2026-01-26 19:23:37.892611548 +0000 UTC m=+5986.599747681" watchObservedRunningTime="2026-01-26 19:23:37.903701207 +0000 UTC m=+5986.610837340" Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.948606 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:23:37 crc kubenswrapper[4787]: I0126 19:23:37.970059 4787 scope.go:117] "RemoveContainer" containerID="0ba1c0f3ed40d9d48536b6d37513a39dd74bba05e8835b6ea81f4ee5c414848c" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.008020 4787 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.061029 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.062972 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.076778 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.077364 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.163259 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4976056-8e89-4116-a348-937ae6765893-scripts\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.163333 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4976056-8e89-4116-a348-937ae6765893-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.163393 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4976056-8e89-4116-a348-937ae6765893-config-data\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc 
kubenswrapper[4787]: I0126 19:23:38.163515 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4976056-8e89-4116-a348-937ae6765893-logs\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.163631 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr22q\" (UniqueName: \"kubernetes.io/projected/c4976056-8e89-4116-a348-937ae6765893-kube-api-access-wr22q\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.163712 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4976056-8e89-4116-a348-937ae6765893-ceph\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.163810 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4976056-8e89-4116-a348-937ae6765893-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.266553 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4976056-8e89-4116-a348-937ae6765893-scripts\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: 
I0126 19:23:38.266609 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4976056-8e89-4116-a348-937ae6765893-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.266649 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4976056-8e89-4116-a348-937ae6765893-config-data\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.266717 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4976056-8e89-4116-a348-937ae6765893-logs\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.266794 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr22q\" (UniqueName: \"kubernetes.io/projected/c4976056-8e89-4116-a348-937ae6765893-kube-api-access-wr22q\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.266847 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4976056-8e89-4116-a348-937ae6765893-ceph\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.266920 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4976056-8e89-4116-a348-937ae6765893-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.267382 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4976056-8e89-4116-a348-937ae6765893-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.267482 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4976056-8e89-4116-a348-937ae6765893-logs\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.274855 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4976056-8e89-4116-a348-937ae6765893-scripts\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.276760 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4976056-8e89-4116-a348-937ae6765893-ceph\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.288045 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4976056-8e89-4116-a348-937ae6765893-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.289822 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4976056-8e89-4116-a348-937ae6765893-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.290793 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr22q\" (UniqueName: \"kubernetes.io/projected/c4976056-8e89-4116-a348-937ae6765893-kube-api-access-wr22q\") pod \"glance-default-external-api-0\" (UID: \"c4976056-8e89-4116-a348-937ae6765893\") " pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.402172 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.494182 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:23:38 crc kubenswrapper[4787]: W0126 19:23:38.512440 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b22e06a_773a_4fc9_891b_631713f4de49.slice/crio-98ba2971bdd94d620ae6b94c88c5f06ae863163719546c49b924a3ee2b02eaad WatchSource:0}: Error finding container 98ba2971bdd94d620ae6b94c88c5f06ae863163719546c49b924a3ee2b02eaad: Status 404 returned error can't find the container with id 98ba2971bdd94d620ae6b94c88c5f06ae863163719546c49b924a3ee2b02eaad Jan 26 19:23:38 crc kubenswrapper[4787]: I0126 19:23:38.853724 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b22e06a-773a-4fc9-891b-631713f4de49","Type":"ContainerStarted","Data":"98ba2971bdd94d620ae6b94c88c5f06ae863163719546c49b924a3ee2b02eaad"} Jan 26 19:23:39 crc kubenswrapper[4787]: I0126 19:23:39.086990 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:23:39 crc kubenswrapper[4787]: I0126 19:23:39.155346 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-79b7d7c77c-9sjzp" Jan 26 19:23:39 crc kubenswrapper[4787]: I0126 19:23:39.431349 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-cc8dc8649-pzbh7" Jan 26 19:23:39 crc kubenswrapper[4787]: I0126 19:23:39.431422 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cc8dc8649-pzbh7" Jan 26 19:23:39 crc kubenswrapper[4787]: I0126 19:23:39.635075 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf8e19f-5564-406a-82ac-27fe75ba40c0" 
path="/var/lib/kubelet/pods/0bf8e19f-5564-406a-82ac-27fe75ba40c0/volumes" Jan 26 19:23:39 crc kubenswrapper[4787]: I0126 19:23:39.878901 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c4976056-8e89-4116-a348-937ae6765893","Type":"ContainerStarted","Data":"fda787180dabb0c4ae778c122bb6fa9235d6194323601ec5c73eb0586d6a4edf"} Jan 26 19:23:39 crc kubenswrapper[4787]: I0126 19:23:39.895152 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b22e06a-773a-4fc9-891b-631713f4de49","Type":"ContainerStarted","Data":"3e8d706056a309a2e7b9de4f0deeca3c681f26e6682e075564143ed37a657c30"} Jan 26 19:23:40 crc kubenswrapper[4787]: I0126 19:23:40.052584 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:40 crc kubenswrapper[4787]: I0126 19:23:40.052639 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:23:40 crc kubenswrapper[4787]: I0126 19:23:40.910546 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b22e06a-773a-4fc9-891b-631713f4de49","Type":"ContainerStarted","Data":"806916ab5d893157df31bceed978f4fda2dfd3d65065d47ea3cd04738cb898e8"} Jan 26 19:23:40 crc kubenswrapper[4787]: I0126 19:23:40.912577 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c4976056-8e89-4116-a348-937ae6765893","Type":"ContainerStarted","Data":"79b590580e35c4555b68f1d5d192f5dfc6852b78cc54a19607c3d795e96b0485"} Jan 26 19:23:40 crc kubenswrapper[4787]: I0126 19:23:40.944080 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.9440562569999997 podStartE2EDuration="3.944056257s" podCreationTimestamp="2026-01-26 19:23:37 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:23:40.932739282 +0000 UTC m=+5989.639875415" watchObservedRunningTime="2026-01-26 19:23:40.944056257 +0000 UTC m=+5989.651192390" Jan 26 19:23:41 crc kubenswrapper[4787]: I0126 19:23:41.924031 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c4976056-8e89-4116-a348-937ae6765893","Type":"ContainerStarted","Data":"bab18aca9e8485b1bba6af13bd4e6f4cec326906a3516cd8946e5ab354006e8d"} Jan 26 19:23:41 crc kubenswrapper[4787]: I0126 19:23:41.949165 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.949137986 podStartE2EDuration="4.949137986s" podCreationTimestamp="2026-01-26 19:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:23:41.941694295 +0000 UTC m=+5990.648830428" watchObservedRunningTime="2026-01-26 19:23:41.949137986 +0000 UTC m=+5990.656274120" Jan 26 19:23:43 crc kubenswrapper[4787]: I0126 19:23:43.050308 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-j5x68"] Jan 26 19:23:43 crc kubenswrapper[4787]: I0126 19:23:43.061162 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-j5x68"] Jan 26 19:23:43 crc kubenswrapper[4787]: I0126 19:23:43.604826 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="833c4748-1557-4cd3-aead-b8928712b5ec" path="/var/lib/kubelet/pods/833c4748-1557-4cd3-aead-b8928712b5ec/volumes" Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.702756 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tv86l"] Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.707511 4787 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.732023 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tv86l"] Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.828239 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ndk\" (UniqueName: \"kubernetes.io/projected/d8fbbb07-4763-4855-a7be-5a4bb5452b93-kube-api-access-k9ndk\") pod \"community-operators-tv86l\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.828441 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-utilities\") pod \"community-operators-tv86l\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.828514 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-catalog-content\") pod \"community-operators-tv86l\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.931259 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-utilities\") pod \"community-operators-tv86l\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.931351 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-catalog-content\") pod \"community-operators-tv86l\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.931428 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ndk\" (UniqueName: \"kubernetes.io/projected/d8fbbb07-4763-4855-a7be-5a4bb5452b93-kube-api-access-k9ndk\") pod \"community-operators-tv86l\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.932480 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-utilities\") pod \"community-operators-tv86l\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.932775 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-catalog-content\") pod \"community-operators-tv86l\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:44 crc kubenswrapper[4787]: I0126 19:23:44.956125 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ndk\" (UniqueName: \"kubernetes.io/projected/d8fbbb07-4763-4855-a7be-5a4bb5452b93-kube-api-access-k9ndk\") pod \"community-operators-tv86l\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:45 crc kubenswrapper[4787]: I0126 19:23:45.048469 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:45 crc kubenswrapper[4787]: I0126 19:23:45.638736 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tv86l"] Jan 26 19:23:46 crc kubenswrapper[4787]: I0126 19:23:46.017746 4787 generic.go:334] "Generic (PLEG): container finished" podID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerID="4a53e990fd15e945972efcea70e8e29695b2337563aab6636b95c902e9bf4f4f" exitCode=0 Jan 26 19:23:46 crc kubenswrapper[4787]: I0126 19:23:46.017836 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tv86l" event={"ID":"d8fbbb07-4763-4855-a7be-5a4bb5452b93","Type":"ContainerDied","Data":"4a53e990fd15e945972efcea70e8e29695b2337563aab6636b95c902e9bf4f4f"} Jan 26 19:23:46 crc kubenswrapper[4787]: I0126 19:23:46.018122 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tv86l" event={"ID":"d8fbbb07-4763-4855-a7be-5a4bb5452b93","Type":"ContainerStarted","Data":"958efa3f291e889fe4fbbd178bc8d43c522af096a33b7fb3ed0304f3a33d1aa9"} Jan 26 19:23:46 crc kubenswrapper[4787]: I0126 19:23:46.807851 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:23:46 crc kubenswrapper[4787]: I0126 19:23:46.808357 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:23:46 crc kubenswrapper[4787]: I0126 19:23:46.808419 4787 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 19:23:46 crc kubenswrapper[4787]: I0126 19:23:46.809222 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:23:46 crc kubenswrapper[4787]: I0126 19:23:46.809274 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" gracePeriod=600 Jan 26 19:23:47 crc kubenswrapper[4787]: I0126 19:23:47.028932 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" exitCode=0 Jan 26 19:23:47 crc kubenswrapper[4787]: I0126 19:23:47.029008 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3"} Jan 26 19:23:47 crc kubenswrapper[4787]: I0126 19:23:47.029087 4787 scope.go:117] "RemoveContainer" containerID="7217ed496ed0b4c2a722c7ac3d16d8a1701ba2e7156a3c1353ad4ba7ede3fc2d" Jan 26 19:23:47 crc kubenswrapper[4787]: E0126 19:23:47.036055 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:23:47 crc kubenswrapper[4787]: I0126 19:23:47.794300 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:47 crc kubenswrapper[4787]: I0126 19:23:47.794687 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:47 crc kubenswrapper[4787]: I0126 19:23:47.828685 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:47 crc kubenswrapper[4787]: I0126 19:23:47.841343 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:48 crc kubenswrapper[4787]: I0126 19:23:48.049424 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:23:48 crc kubenswrapper[4787]: E0126 19:23:48.050011 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:23:48 crc kubenswrapper[4787]: I0126 19:23:48.052584 4787 generic.go:334] "Generic (PLEG): container finished" podID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerID="8ba66ed167fcf4c46bfb66995f2f86f18a8499cb17d85fcd10d1531344ff544e" exitCode=0 Jan 26 19:23:48 crc kubenswrapper[4787]: I0126 19:23:48.052686 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-tv86l" event={"ID":"d8fbbb07-4763-4855-a7be-5a4bb5452b93","Type":"ContainerDied","Data":"8ba66ed167fcf4c46bfb66995f2f86f18a8499cb17d85fcd10d1531344ff544e"} Jan 26 19:23:48 crc kubenswrapper[4787]: I0126 19:23:48.054355 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:48 crc kubenswrapper[4787]: I0126 19:23:48.054385 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:48 crc kubenswrapper[4787]: I0126 19:23:48.403208 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 19:23:48 crc kubenswrapper[4787]: I0126 19:23:48.403564 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 19:23:48 crc kubenswrapper[4787]: I0126 19:23:48.441489 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 19:23:48 crc kubenswrapper[4787]: I0126 19:23:48.468590 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 19:23:49 crc kubenswrapper[4787]: I0126 19:23:49.061093 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 19:23:49 crc kubenswrapper[4787]: I0126 19:23:49.061157 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 19:23:49 crc kubenswrapper[4787]: I0126 19:23:49.432201 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-cc8dc8649-pzbh7" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.106:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.1.106:8080: connect: connection refused" Jan 26 19:23:50 crc kubenswrapper[4787]: I0126 19:23:50.054082 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-655fbc65c-zp8cb" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 26 19:23:50 crc kubenswrapper[4787]: I0126 19:23:50.936424 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:50 crc kubenswrapper[4787]: I0126 19:23:50.936862 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 19:23:50 crc kubenswrapper[4787]: I0126 19:23:50.992554 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 19:23:51 crc kubenswrapper[4787]: I0126 19:23:51.358589 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 19:23:51 crc kubenswrapper[4787]: I0126 19:23:51.358687 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 19:23:51 crc kubenswrapper[4787]: I0126 19:23:51.493214 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 19:23:52 crc kubenswrapper[4787]: I0126 19:23:52.088633 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tv86l" event={"ID":"d8fbbb07-4763-4855-a7be-5a4bb5452b93","Type":"ContainerStarted","Data":"6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730"} Jan 26 19:23:55 crc kubenswrapper[4787]: I0126 19:23:55.050468 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:55 crc 
kubenswrapper[4787]: I0126 19:23:55.054861 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:23:56 crc kubenswrapper[4787]: I0126 19:23:56.096048 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tv86l" podUID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerName="registry-server" probeResult="failure" output=< Jan 26 19:23:56 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 26 19:23:56 crc kubenswrapper[4787]: > Jan 26 19:23:59 crc kubenswrapper[4787]: I0126 19:23:59.556651 4787 scope.go:117] "RemoveContainer" containerID="99cb4b7a120c21050834e78e03f50ce1306e49158c06aa1c011ee79177ae1346" Jan 26 19:23:59 crc kubenswrapper[4787]: I0126 19:23:59.625692 4787 scope.go:117] "RemoveContainer" containerID="57d8a76fcf146b78ef9a44196759c6f6fb6cb1daa3bf5d9d777639af5f66416a" Jan 26 19:23:59 crc kubenswrapper[4787]: I0126 19:23:59.653541 4787 scope.go:117] "RemoveContainer" containerID="153515b388104022ee1ba3aa94a7308b4278205c522c4a37e7a0ec74dc352feb" Jan 26 19:23:59 crc kubenswrapper[4787]: I0126 19:23:59.718331 4787 scope.go:117] "RemoveContainer" containerID="b86265110814e8c696c61160b2c3a0f97de67376505895f05bea4d9b3691c377" Jan 26 19:23:59 crc kubenswrapper[4787]: I0126 19:23:59.797162 4787 scope.go:117] "RemoveContainer" containerID="754b844e6149d8492303332ebc91243c8e6dcf1faff8523fceb632f392eb6d8d" Jan 26 19:23:59 crc kubenswrapper[4787]: I0126 19:23:59.817018 4787 scope.go:117] "RemoveContainer" containerID="4a25a4e20c9fdd28a118b787d674dad975b4ae96767d4fbdaafdb3cb72efc7e7" Jan 26 19:24:00 crc kubenswrapper[4787]: I0126 19:24:00.053322 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-655fbc65c-zp8cb" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.1.107:8080: connect: connection refused" Jan 26 19:24:00 crc kubenswrapper[4787]: I0126 19:24:00.589331 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:24:00 crc kubenswrapper[4787]: E0126 19:24:00.589676 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:24:01 crc kubenswrapper[4787]: I0126 19:24:01.553220 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-cc8dc8649-pzbh7" Jan 26 19:24:01 crc kubenswrapper[4787]: I0126 19:24:01.579123 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tv86l" podStartSLOduration=13.781140123 podStartE2EDuration="17.579101957s" podCreationTimestamp="2026-01-26 19:23:44 +0000 UTC" firstStartedPulling="2026-01-26 19:23:46.019513322 +0000 UTC m=+5994.726649455" lastFinishedPulling="2026-01-26 19:23:49.817475156 +0000 UTC m=+5998.524611289" observedRunningTime="2026-01-26 19:23:52.122323249 +0000 UTC m=+6000.829459382" watchObservedRunningTime="2026-01-26 19:24:01.579101957 +0000 UTC m=+6010.286238090" Jan 26 19:24:03 crc kubenswrapper[4787]: I0126 19:24:03.348191 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-cc8dc8649-pzbh7" Jan 26 19:24:05 crc kubenswrapper[4787]: I0126 19:24:05.097958 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:24:05 crc kubenswrapper[4787]: I0126 19:24:05.170592 4787 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:24:05 crc kubenswrapper[4787]: I0126 19:24:05.341254 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tv86l"] Jan 26 19:24:06 crc kubenswrapper[4787]: I0126 19:24:06.243204 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tv86l" podUID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerName="registry-server" containerID="cri-o://6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730" gracePeriod=2 Jan 26 19:24:06 crc kubenswrapper[4787]: I0126 19:24:06.730310 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:24:06 crc kubenswrapper[4787]: I0126 19:24:06.836847 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-catalog-content\") pod \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " Jan 26 19:24:06 crc kubenswrapper[4787]: I0126 19:24:06.837066 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9ndk\" (UniqueName: \"kubernetes.io/projected/d8fbbb07-4763-4855-a7be-5a4bb5452b93-kube-api-access-k9ndk\") pod \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " Jan 26 19:24:06 crc kubenswrapper[4787]: I0126 19:24:06.837164 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-utilities\") pod \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\" (UID: \"d8fbbb07-4763-4855-a7be-5a4bb5452b93\") " Jan 26 19:24:06 crc kubenswrapper[4787]: I0126 19:24:06.837608 4787 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-utilities" (OuterVolumeSpecName: "utilities") pod "d8fbbb07-4763-4855-a7be-5a4bb5452b93" (UID: "d8fbbb07-4763-4855-a7be-5a4bb5452b93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:24:06 crc kubenswrapper[4787]: I0126 19:24:06.844994 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8fbbb07-4763-4855-a7be-5a4bb5452b93-kube-api-access-k9ndk" (OuterVolumeSpecName: "kube-api-access-k9ndk") pod "d8fbbb07-4763-4855-a7be-5a4bb5452b93" (UID: "d8fbbb07-4763-4855-a7be-5a4bb5452b93"). InnerVolumeSpecName "kube-api-access-k9ndk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:24:06 crc kubenswrapper[4787]: I0126 19:24:06.895466 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8fbbb07-4763-4855-a7be-5a4bb5452b93" (UID: "d8fbbb07-4763-4855-a7be-5a4bb5452b93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:24:06 crc kubenswrapper[4787]: I0126 19:24:06.939546 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9ndk\" (UniqueName: \"kubernetes.io/projected/d8fbbb07-4763-4855-a7be-5a4bb5452b93-kube-api-access-k9ndk\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:06 crc kubenswrapper[4787]: I0126 19:24:06.939582 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:06 crc kubenswrapper[4787]: I0126 19:24:06.939594 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8fbbb07-4763-4855-a7be-5a4bb5452b93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.253741 4787 generic.go:334] "Generic (PLEG): container finished" podID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerID="6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730" exitCode=0 Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.253799 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tv86l" Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.253817 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tv86l" event={"ID":"d8fbbb07-4763-4855-a7be-5a4bb5452b93","Type":"ContainerDied","Data":"6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730"} Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.254145 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tv86l" event={"ID":"d8fbbb07-4763-4855-a7be-5a4bb5452b93","Type":"ContainerDied","Data":"958efa3f291e889fe4fbbd178bc8d43c522af096a33b7fb3ed0304f3a33d1aa9"} Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.254171 4787 scope.go:117] "RemoveContainer" containerID="6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730" Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.279504 4787 scope.go:117] "RemoveContainer" containerID="8ba66ed167fcf4c46bfb66995f2f86f18a8499cb17d85fcd10d1531344ff544e" Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.294047 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tv86l"] Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.302941 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tv86l"] Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.313654 4787 scope.go:117] "RemoveContainer" containerID="4a53e990fd15e945972efcea70e8e29695b2337563aab6636b95c902e9bf4f4f" Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.340228 4787 scope.go:117] "RemoveContainer" containerID="6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730" Jan 26 19:24:07 crc kubenswrapper[4787]: E0126 19:24:07.340632 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730\": container with ID starting with 6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730 not found: ID does not exist" containerID="6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730" Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.340672 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730"} err="failed to get container status \"6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730\": rpc error: code = NotFound desc = could not find container \"6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730\": container with ID starting with 6a3e0f851418283c3658bb46cf1a6f7707610937a6eba29ca636b7f77e0f0730 not found: ID does not exist" Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.340695 4787 scope.go:117] "RemoveContainer" containerID="8ba66ed167fcf4c46bfb66995f2f86f18a8499cb17d85fcd10d1531344ff544e" Jan 26 19:24:07 crc kubenswrapper[4787]: E0126 19:24:07.340980 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ba66ed167fcf4c46bfb66995f2f86f18a8499cb17d85fcd10d1531344ff544e\": container with ID starting with 8ba66ed167fcf4c46bfb66995f2f86f18a8499cb17d85fcd10d1531344ff544e not found: ID does not exist" containerID="8ba66ed167fcf4c46bfb66995f2f86f18a8499cb17d85fcd10d1531344ff544e" Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.341008 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ba66ed167fcf4c46bfb66995f2f86f18a8499cb17d85fcd10d1531344ff544e"} err="failed to get container status \"8ba66ed167fcf4c46bfb66995f2f86f18a8499cb17d85fcd10d1531344ff544e\": rpc error: code = NotFound desc = could not find container \"8ba66ed167fcf4c46bfb66995f2f86f18a8499cb17d85fcd10d1531344ff544e\": container with ID 
starting with 8ba66ed167fcf4c46bfb66995f2f86f18a8499cb17d85fcd10d1531344ff544e not found: ID does not exist" Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.341023 4787 scope.go:117] "RemoveContainer" containerID="4a53e990fd15e945972efcea70e8e29695b2337563aab6636b95c902e9bf4f4f" Jan 26 19:24:07 crc kubenswrapper[4787]: E0126 19:24:07.341355 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a53e990fd15e945972efcea70e8e29695b2337563aab6636b95c902e9bf4f4f\": container with ID starting with 4a53e990fd15e945972efcea70e8e29695b2337563aab6636b95c902e9bf4f4f not found: ID does not exist" containerID="4a53e990fd15e945972efcea70e8e29695b2337563aab6636b95c902e9bf4f4f" Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.341386 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a53e990fd15e945972efcea70e8e29695b2337563aab6636b95c902e9bf4f4f"} err="failed to get container status \"4a53e990fd15e945972efcea70e8e29695b2337563aab6636b95c902e9bf4f4f\": rpc error: code = NotFound desc = could not find container \"4a53e990fd15e945972efcea70e8e29695b2337563aab6636b95c902e9bf4f4f\": container with ID starting with 4a53e990fd15e945972efcea70e8e29695b2337563aab6636b95c902e9bf4f4f not found: ID does not exist" Jan 26 19:24:07 crc kubenswrapper[4787]: I0126 19:24:07.600590 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" path="/var/lib/kubelet/pods/d8fbbb07-4763-4855-a7be-5a4bb5452b93/volumes" Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.266329 4787 generic.go:334] "Generic (PLEG): container finished" podID="dcd03098-9c84-4642-86a7-4e6638732177" containerID="4c928bb5cbc15310761c595d8079895a01da4415c602ab64d2a4a209bb3ea367" exitCode=137 Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.266617 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="dcd03098-9c84-4642-86a7-4e6638732177" containerID="71819a67a1596edcc2cdd280f262f389162bed2121b2c18d9b90562f52120650" exitCode=137 Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.266464 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79b7d7c77c-9sjzp" event={"ID":"dcd03098-9c84-4642-86a7-4e6638732177","Type":"ContainerDied","Data":"4c928bb5cbc15310761c595d8079895a01da4415c602ab64d2a4a209bb3ea367"} Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.266681 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79b7d7c77c-9sjzp" event={"ID":"dcd03098-9c84-4642-86a7-4e6638732177","Type":"ContainerDied","Data":"71819a67a1596edcc2cdd280f262f389162bed2121b2c18d9b90562f52120650"} Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.402879 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79b7d7c77c-9sjzp" Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.570637 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k49xk\" (UniqueName: \"kubernetes.io/projected/dcd03098-9c84-4642-86a7-4e6638732177-kube-api-access-k49xk\") pod \"dcd03098-9c84-4642-86a7-4e6638732177\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.570784 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dcd03098-9c84-4642-86a7-4e6638732177-horizon-secret-key\") pod \"dcd03098-9c84-4642-86a7-4e6638732177\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.570810 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcd03098-9c84-4642-86a7-4e6638732177-logs\") pod \"dcd03098-9c84-4642-86a7-4e6638732177\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " 
Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.570851 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-scripts\") pod \"dcd03098-9c84-4642-86a7-4e6638732177\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.571215 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd03098-9c84-4642-86a7-4e6638732177-logs" (OuterVolumeSpecName: "logs") pod "dcd03098-9c84-4642-86a7-4e6638732177" (UID: "dcd03098-9c84-4642-86a7-4e6638732177"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.571333 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-config-data\") pod \"dcd03098-9c84-4642-86a7-4e6638732177\" (UID: \"dcd03098-9c84-4642-86a7-4e6638732177\") " Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.571599 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcd03098-9c84-4642-86a7-4e6638732177-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.575600 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd03098-9c84-4642-86a7-4e6638732177-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dcd03098-9c84-4642-86a7-4e6638732177" (UID: "dcd03098-9c84-4642-86a7-4e6638732177"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.582804 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd03098-9c84-4642-86a7-4e6638732177-kube-api-access-k49xk" (OuterVolumeSpecName: "kube-api-access-k49xk") pod "dcd03098-9c84-4642-86a7-4e6638732177" (UID: "dcd03098-9c84-4642-86a7-4e6638732177"). InnerVolumeSpecName "kube-api-access-k49xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.600669 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-scripts" (OuterVolumeSpecName: "scripts") pod "dcd03098-9c84-4642-86a7-4e6638732177" (UID: "dcd03098-9c84-4642-86a7-4e6638732177"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.602448 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-config-data" (OuterVolumeSpecName: "config-data") pod "dcd03098-9c84-4642-86a7-4e6638732177" (UID: "dcd03098-9c84-4642-86a7-4e6638732177"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.674985 4787 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dcd03098-9c84-4642-86a7-4e6638732177-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.675016 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.675029 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd03098-9c84-4642-86a7-4e6638732177-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:08 crc kubenswrapper[4787]: I0126 19:24:08.675042 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k49xk\" (UniqueName: \"kubernetes.io/projected/dcd03098-9c84-4642-86a7-4e6638732177-kube-api-access-k49xk\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:09 crc kubenswrapper[4787]: I0126 19:24:09.280915 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79b7d7c77c-9sjzp" event={"ID":"dcd03098-9c84-4642-86a7-4e6638732177","Type":"ContainerDied","Data":"00f7c8699e205af2b3ca887267ee98101baf753de81d4109f31051ce7a4a620a"} Jan 26 19:24:09 crc kubenswrapper[4787]: I0126 19:24:09.280996 4787 scope.go:117] "RemoveContainer" containerID="4c928bb5cbc15310761c595d8079895a01da4415c602ab64d2a4a209bb3ea367" Jan 26 19:24:09 crc kubenswrapper[4787]: I0126 19:24:09.281023 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79b7d7c77c-9sjzp" Jan 26 19:24:09 crc kubenswrapper[4787]: I0126 19:24:09.342678 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79b7d7c77c-9sjzp"] Jan 26 19:24:09 crc kubenswrapper[4787]: I0126 19:24:09.352707 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-79b7d7c77c-9sjzp"] Jan 26 19:24:09 crc kubenswrapper[4787]: I0126 19:24:09.477248 4787 scope.go:117] "RemoveContainer" containerID="71819a67a1596edcc2cdd280f262f389162bed2121b2c18d9b90562f52120650" Jan 26 19:24:09 crc kubenswrapper[4787]: I0126 19:24:09.609026 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd03098-9c84-4642-86a7-4e6638732177" path="/var/lib/kubelet/pods/dcd03098-9c84-4642-86a7-4e6638732177/volumes" Jan 26 19:24:12 crc kubenswrapper[4787]: I0126 19:24:12.087138 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:24:13 crc kubenswrapper[4787]: I0126 19:24:13.908761 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:24:13 crc kubenswrapper[4787]: I0126 19:24:13.977040 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cc8dc8649-pzbh7"] Jan 26 19:24:13 crc kubenswrapper[4787]: I0126 19:24:13.977379 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cc8dc8649-pzbh7" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon-log" containerID="cri-o://1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e" gracePeriod=30 Jan 26 19:24:13 crc kubenswrapper[4787]: I0126 19:24:13.977396 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cc8dc8649-pzbh7" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon" 
containerID="cri-o://c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3" gracePeriod=30 Jan 26 19:24:14 crc kubenswrapper[4787]: I0126 19:24:14.589443 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:24:14 crc kubenswrapper[4787]: E0126 19:24:14.589968 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:24:17 crc kubenswrapper[4787]: I0126 19:24:17.372466 4787 generic.go:334] "Generic (PLEG): container finished" podID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerID="c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3" exitCode=0 Jan 26 19:24:17 crc kubenswrapper[4787]: I0126 19:24:17.372616 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc8dc8649-pzbh7" event={"ID":"921ee18e-4ea9-4270-9f29-515a05ca5eff","Type":"ContainerDied","Data":"c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3"} Jan 26 19:24:19 crc kubenswrapper[4787]: I0126 19:24:19.431904 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc8dc8649-pzbh7" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.106:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.106:8080: connect: connection refused" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.332367 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b5b7ff8f-5f4tv"] Jan 26 19:24:21 crc kubenswrapper[4787]: E0126 19:24:21.334198 4787 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerName="extract-content" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.334299 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerName="extract-content" Jan 26 19:24:21 crc kubenswrapper[4787]: E0126 19:24:21.334395 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd03098-9c84-4642-86a7-4e6638732177" containerName="horizon-log" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.334471 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd03098-9c84-4642-86a7-4e6638732177" containerName="horizon-log" Jan 26 19:24:21 crc kubenswrapper[4787]: E0126 19:24:21.334571 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd03098-9c84-4642-86a7-4e6638732177" containerName="horizon" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.334654 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd03098-9c84-4642-86a7-4e6638732177" containerName="horizon" Jan 26 19:24:21 crc kubenswrapper[4787]: E0126 19:24:21.334743 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerName="extract-utilities" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.334829 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerName="extract-utilities" Jan 26 19:24:21 crc kubenswrapper[4787]: E0126 19:24:21.334934 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerName="registry-server" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.335043 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerName="registry-server" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.335454 4787 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d8fbbb07-4763-4855-a7be-5a4bb5452b93" containerName="registry-server" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.335568 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd03098-9c84-4642-86a7-4e6638732177" containerName="horizon-log" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.335669 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd03098-9c84-4642-86a7-4e6638732177" containerName="horizon" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.337104 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.360895 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b5b7ff8f-5f4tv"] Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.446564 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1fed33f-a577-4398-bd7f-dc9231312768-logs\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.446641 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwz6r\" (UniqueName: \"kubernetes.io/projected/f1fed33f-a577-4398-bd7f-dc9231312768-kube-api-access-hwz6r\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.446688 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1fed33f-a577-4398-bd7f-dc9231312768-config-data\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 
crc kubenswrapper[4787]: I0126 19:24:21.446747 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1fed33f-a577-4398-bd7f-dc9231312768-horizon-secret-key\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.446855 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1fed33f-a577-4398-bd7f-dc9231312768-scripts\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.548865 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1fed33f-a577-4398-bd7f-dc9231312768-scripts\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.549071 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1fed33f-a577-4398-bd7f-dc9231312768-logs\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.549114 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwz6r\" (UniqueName: \"kubernetes.io/projected/f1fed33f-a577-4398-bd7f-dc9231312768-kube-api-access-hwz6r\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.549180 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1fed33f-a577-4398-bd7f-dc9231312768-config-data\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.549586 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1fed33f-a577-4398-bd7f-dc9231312768-horizon-secret-key\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.549639 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1fed33f-a577-4398-bd7f-dc9231312768-logs\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.550093 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1fed33f-a577-4398-bd7f-dc9231312768-scripts\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.551144 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1fed33f-a577-4398-bd7f-dc9231312768-config-data\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.555608 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1fed33f-a577-4398-bd7f-dc9231312768-horizon-secret-key\") pod 
\"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.573867 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwz6r\" (UniqueName: \"kubernetes.io/projected/f1fed33f-a577-4398-bd7f-dc9231312768-kube-api-access-hwz6r\") pod \"horizon-b5b7ff8f-5f4tv\" (UID: \"f1fed33f-a577-4398-bd7f-dc9231312768\") " pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:21 crc kubenswrapper[4787]: I0126 19:24:21.657689 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.225164 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b5b7ff8f-5f4tv"] Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.417569 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5b7ff8f-5f4tv" event={"ID":"f1fed33f-a577-4398-bd7f-dc9231312768","Type":"ContainerStarted","Data":"744d70d9036f20bf2f20b7a7bb373e28a89c524e49b4e86cbc63265612cea633"} Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.550099 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-6lpbc"] Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.551552 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-6lpbc" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.590729 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-6lpbc"] Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.674780 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-8227-account-create-update-5tdpx"] Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.676894 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v62d\" (UniqueName: \"kubernetes.io/projected/973b6e22-3478-4638-9023-474565f3a8fb-kube-api-access-5v62d\") pod \"heat-db-create-6lpbc\" (UID: \"973b6e22-3478-4638-9023-474565f3a8fb\") " pod="openstack/heat-db-create-6lpbc" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.677299 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973b6e22-3478-4638-9023-474565f3a8fb-operator-scripts\") pod \"heat-db-create-6lpbc\" (UID: \"973b6e22-3478-4638-9023-474565f3a8fb\") " pod="openstack/heat-db-create-6lpbc" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.680567 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-8227-account-create-update-5tdpx" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.686314 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.695907 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8227-account-create-update-5tdpx"] Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.778885 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v62d\" (UniqueName: \"kubernetes.io/projected/973b6e22-3478-4638-9023-474565f3a8fb-kube-api-access-5v62d\") pod \"heat-db-create-6lpbc\" (UID: \"973b6e22-3478-4638-9023-474565f3a8fb\") " pod="openstack/heat-db-create-6lpbc" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.779375 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gkhh\" (UniqueName: \"kubernetes.io/projected/f3dae570-08ed-49e2-a450-42d8b512a701-kube-api-access-7gkhh\") pod \"heat-8227-account-create-update-5tdpx\" (UID: \"f3dae570-08ed-49e2-a450-42d8b512a701\") " pod="openstack/heat-8227-account-create-update-5tdpx" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.779523 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3dae570-08ed-49e2-a450-42d8b512a701-operator-scripts\") pod \"heat-8227-account-create-update-5tdpx\" (UID: \"f3dae570-08ed-49e2-a450-42d8b512a701\") " pod="openstack/heat-8227-account-create-update-5tdpx" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.779600 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973b6e22-3478-4638-9023-474565f3a8fb-operator-scripts\") pod \"heat-db-create-6lpbc\" (UID: 
\"973b6e22-3478-4638-9023-474565f3a8fb\") " pod="openstack/heat-db-create-6lpbc" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.780889 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973b6e22-3478-4638-9023-474565f3a8fb-operator-scripts\") pod \"heat-db-create-6lpbc\" (UID: \"973b6e22-3478-4638-9023-474565f3a8fb\") " pod="openstack/heat-db-create-6lpbc" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.797852 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v62d\" (UniqueName: \"kubernetes.io/projected/973b6e22-3478-4638-9023-474565f3a8fb-kube-api-access-5v62d\") pod \"heat-db-create-6lpbc\" (UID: \"973b6e22-3478-4638-9023-474565f3a8fb\") " pod="openstack/heat-db-create-6lpbc" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.881370 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3dae570-08ed-49e2-a450-42d8b512a701-operator-scripts\") pod \"heat-8227-account-create-update-5tdpx\" (UID: \"f3dae570-08ed-49e2-a450-42d8b512a701\") " pod="openstack/heat-8227-account-create-update-5tdpx" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.881560 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gkhh\" (UniqueName: \"kubernetes.io/projected/f3dae570-08ed-49e2-a450-42d8b512a701-kube-api-access-7gkhh\") pod \"heat-8227-account-create-update-5tdpx\" (UID: \"f3dae570-08ed-49e2-a450-42d8b512a701\") " pod="openstack/heat-8227-account-create-update-5tdpx" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.883137 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3dae570-08ed-49e2-a450-42d8b512a701-operator-scripts\") pod \"heat-8227-account-create-update-5tdpx\" (UID: 
\"f3dae570-08ed-49e2-a450-42d8b512a701\") " pod="openstack/heat-8227-account-create-update-5tdpx" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.904906 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gkhh\" (UniqueName: \"kubernetes.io/projected/f3dae570-08ed-49e2-a450-42d8b512a701-kube-api-access-7gkhh\") pod \"heat-8227-account-create-update-5tdpx\" (UID: \"f3dae570-08ed-49e2-a450-42d8b512a701\") " pod="openstack/heat-8227-account-create-update-5tdpx" Jan 26 19:24:22 crc kubenswrapper[4787]: I0126 19:24:22.939184 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-6lpbc" Jan 26 19:24:23 crc kubenswrapper[4787]: I0126 19:24:22.999906 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8227-account-create-update-5tdpx" Jan 26 19:24:23 crc kubenswrapper[4787]: I0126 19:24:23.429694 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5b7ff8f-5f4tv" event={"ID":"f1fed33f-a577-4398-bd7f-dc9231312768","Type":"ContainerStarted","Data":"c420344b803bfd5267af9cd819d4150dd25119605fc11820a6207e51eb4bf82d"} Jan 26 19:24:23 crc kubenswrapper[4787]: I0126 19:24:23.429752 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5b7ff8f-5f4tv" event={"ID":"f1fed33f-a577-4398-bd7f-dc9231312768","Type":"ContainerStarted","Data":"2aa983616330b3f4d49332829b79ca98e52a1324f147a91b29880ddf06ef0e0c"} Jan 26 19:24:23 crc kubenswrapper[4787]: I0126 19:24:23.461199 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b5b7ff8f-5f4tv" podStartSLOduration=2.4611776069999998 podStartE2EDuration="2.461177607s" podCreationTimestamp="2026-01-26 19:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:24:23.450655452 +0000 UTC m=+6032.157791585" 
watchObservedRunningTime="2026-01-26 19:24:23.461177607 +0000 UTC m=+6032.168313740" Jan 26 19:24:23 crc kubenswrapper[4787]: I0126 19:24:23.479797 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-6lpbc"] Jan 26 19:24:23 crc kubenswrapper[4787]: W0126 19:24:23.479814 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod973b6e22_3478_4638_9023_474565f3a8fb.slice/crio-520008581f469446cb5255b1d3c8498a8f2cc1986c8a9044ef0e41579fd09933 WatchSource:0}: Error finding container 520008581f469446cb5255b1d3c8498a8f2cc1986c8a9044ef0e41579fd09933: Status 404 returned error can't find the container with id 520008581f469446cb5255b1d3c8498a8f2cc1986c8a9044ef0e41579fd09933 Jan 26 19:24:23 crc kubenswrapper[4787]: I0126 19:24:23.603129 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8227-account-create-update-5tdpx"] Jan 26 19:24:24 crc kubenswrapper[4787]: I0126 19:24:24.442366 4787 generic.go:334] "Generic (PLEG): container finished" podID="f3dae570-08ed-49e2-a450-42d8b512a701" containerID="50aefd6818856d2225e5d6f1f1e563b0fe28550e83c66f8383a55461bbc5f238" exitCode=0 Jan 26 19:24:24 crc kubenswrapper[4787]: I0126 19:24:24.442427 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8227-account-create-update-5tdpx" event={"ID":"f3dae570-08ed-49e2-a450-42d8b512a701","Type":"ContainerDied","Data":"50aefd6818856d2225e5d6f1f1e563b0fe28550e83c66f8383a55461bbc5f238"} Jan 26 19:24:24 crc kubenswrapper[4787]: I0126 19:24:24.442999 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8227-account-create-update-5tdpx" event={"ID":"f3dae570-08ed-49e2-a450-42d8b512a701","Type":"ContainerStarted","Data":"78d6b7b1a8a6e2d28e92f4c22aa4a0f02688e19e711d68ef84ac48edf35342c7"} Jan 26 19:24:24 crc kubenswrapper[4787]: I0126 19:24:24.444690 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="973b6e22-3478-4638-9023-474565f3a8fb" containerID="134f564ff61c7e7e7600bf0d35c54b5edecfe5f5ed1664b4109f3b0e2c013db9" exitCode=0 Jan 26 19:24:24 crc kubenswrapper[4787]: I0126 19:24:24.445125 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6lpbc" event={"ID":"973b6e22-3478-4638-9023-474565f3a8fb","Type":"ContainerDied","Data":"134f564ff61c7e7e7600bf0d35c54b5edecfe5f5ed1664b4109f3b0e2c013db9"} Jan 26 19:24:24 crc kubenswrapper[4787]: I0126 19:24:24.445180 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6lpbc" event={"ID":"973b6e22-3478-4638-9023-474565f3a8fb","Type":"ContainerStarted","Data":"520008581f469446cb5255b1d3c8498a8f2cc1986c8a9044ef0e41579fd09933"} Jan 26 19:24:25 crc kubenswrapper[4787]: I0126 19:24:25.045899 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2d7ds"] Jan 26 19:24:25 crc kubenswrapper[4787]: I0126 19:24:25.059183 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0a26-account-create-update-ngdcz"] Jan 26 19:24:25 crc kubenswrapper[4787]: I0126 19:24:25.071376 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2d7ds"] Jan 26 19:24:25 crc kubenswrapper[4787]: I0126 19:24:25.083602 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0a26-account-create-update-ngdcz"] Jan 26 19:24:25 crc kubenswrapper[4787]: I0126 19:24:25.602710 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98757f95-4a1d-4854-a2ce-39c97784b153" path="/var/lib/kubelet/pods/98757f95-4a1d-4854-a2ce-39c97784b153/volumes" Jan 26 19:24:25 crc kubenswrapper[4787]: I0126 19:24:25.604374 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d85a6954-2c85-4237-a63e-2d2391fdc1dc" path="/var/lib/kubelet/pods/d85a6954-2c85-4237-a63e-2d2391fdc1dc/volumes" Jan 26 19:24:25 crc kubenswrapper[4787]: I0126 19:24:25.884034 4787 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-6lpbc" Jan 26 19:24:25 crc kubenswrapper[4787]: I0126 19:24:25.892662 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8227-account-create-update-5tdpx" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.062104 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973b6e22-3478-4638-9023-474565f3a8fb-operator-scripts\") pod \"973b6e22-3478-4638-9023-474565f3a8fb\" (UID: \"973b6e22-3478-4638-9023-474565f3a8fb\") " Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.062490 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gkhh\" (UniqueName: \"kubernetes.io/projected/f3dae570-08ed-49e2-a450-42d8b512a701-kube-api-access-7gkhh\") pod \"f3dae570-08ed-49e2-a450-42d8b512a701\" (UID: \"f3dae570-08ed-49e2-a450-42d8b512a701\") " Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.062512 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3dae570-08ed-49e2-a450-42d8b512a701-operator-scripts\") pod \"f3dae570-08ed-49e2-a450-42d8b512a701\" (UID: \"f3dae570-08ed-49e2-a450-42d8b512a701\") " Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.062616 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v62d\" (UniqueName: \"kubernetes.io/projected/973b6e22-3478-4638-9023-474565f3a8fb-kube-api-access-5v62d\") pod \"973b6e22-3478-4638-9023-474565f3a8fb\" (UID: \"973b6e22-3478-4638-9023-474565f3a8fb\") " Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.062646 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973b6e22-3478-4638-9023-474565f3a8fb-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "973b6e22-3478-4638-9023-474565f3a8fb" (UID: "973b6e22-3478-4638-9023-474565f3a8fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.063076 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973b6e22-3478-4638-9023-474565f3a8fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.063165 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3dae570-08ed-49e2-a450-42d8b512a701-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3dae570-08ed-49e2-a450-42d8b512a701" (UID: "f3dae570-08ed-49e2-a450-42d8b512a701"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.069039 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3dae570-08ed-49e2-a450-42d8b512a701-kube-api-access-7gkhh" (OuterVolumeSpecName: "kube-api-access-7gkhh") pod "f3dae570-08ed-49e2-a450-42d8b512a701" (UID: "f3dae570-08ed-49e2-a450-42d8b512a701"). InnerVolumeSpecName "kube-api-access-7gkhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.071809 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973b6e22-3478-4638-9023-474565f3a8fb-kube-api-access-5v62d" (OuterVolumeSpecName: "kube-api-access-5v62d") pod "973b6e22-3478-4638-9023-474565f3a8fb" (UID: "973b6e22-3478-4638-9023-474565f3a8fb"). InnerVolumeSpecName "kube-api-access-5v62d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.165263 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gkhh\" (UniqueName: \"kubernetes.io/projected/f3dae570-08ed-49e2-a450-42d8b512a701-kube-api-access-7gkhh\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.165300 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3dae570-08ed-49e2-a450-42d8b512a701-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.165311 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v62d\" (UniqueName: \"kubernetes.io/projected/973b6e22-3478-4638-9023-474565f3a8fb-kube-api-access-5v62d\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.465061 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-8227-account-create-update-5tdpx" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.468006 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8227-account-create-update-5tdpx" event={"ID":"f3dae570-08ed-49e2-a450-42d8b512a701","Type":"ContainerDied","Data":"78d6b7b1a8a6e2d28e92f4c22aa4a0f02688e19e711d68ef84ac48edf35342c7"} Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.468047 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78d6b7b1a8a6e2d28e92f4c22aa4a0f02688e19e711d68ef84ac48edf35342c7" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.469978 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6lpbc" event={"ID":"973b6e22-3478-4638-9023-474565f3a8fb","Type":"ContainerDied","Data":"520008581f469446cb5255b1d3c8498a8f2cc1986c8a9044ef0e41579fd09933"} Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.470030 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="520008581f469446cb5255b1d3c8498a8f2cc1986c8a9044ef0e41579fd09933" Jan 26 19:24:26 crc kubenswrapper[4787]: I0126 19:24:26.470103 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-6lpbc" Jan 26 19:24:27 crc kubenswrapper[4787]: I0126 19:24:27.846986 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-cxp2v"] Jan 26 19:24:27 crc kubenswrapper[4787]: E0126 19:24:27.847784 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973b6e22-3478-4638-9023-474565f3a8fb" containerName="mariadb-database-create" Jan 26 19:24:27 crc kubenswrapper[4787]: I0126 19:24:27.847802 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="973b6e22-3478-4638-9023-474565f3a8fb" containerName="mariadb-database-create" Jan 26 19:24:27 crc kubenswrapper[4787]: E0126 19:24:27.847860 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3dae570-08ed-49e2-a450-42d8b512a701" containerName="mariadb-account-create-update" Jan 26 19:24:27 crc kubenswrapper[4787]: I0126 19:24:27.847869 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3dae570-08ed-49e2-a450-42d8b512a701" containerName="mariadb-account-create-update" Jan 26 19:24:27 crc kubenswrapper[4787]: I0126 19:24:27.848243 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="973b6e22-3478-4638-9023-474565f3a8fb" containerName="mariadb-database-create" Jan 26 19:24:27 crc kubenswrapper[4787]: I0126 19:24:27.848264 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3dae570-08ed-49e2-a450-42d8b512a701" containerName="mariadb-account-create-update" Jan 26 19:24:27 crc kubenswrapper[4787]: I0126 19:24:27.849157 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:27 crc kubenswrapper[4787]: I0126 19:24:27.852428 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 26 19:24:27 crc kubenswrapper[4787]: I0126 19:24:27.859390 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-cxp2v"] Jan 26 19:24:27 crc kubenswrapper[4787]: I0126 19:24:27.859871 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nnjp4" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 19:24:28.003898 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-combined-ca-bundle\") pod \"heat-db-sync-cxp2v\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 19:24:28.004133 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzs55\" (UniqueName: \"kubernetes.io/projected/3448f12c-52a4-47a3-8247-084746b6c32f-kube-api-access-fzs55\") pod \"heat-db-sync-cxp2v\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 19:24:28.004301 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-config-data\") pod \"heat-db-sync-cxp2v\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 19:24:28.106015 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-combined-ca-bundle\") pod 
\"heat-db-sync-cxp2v\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 19:24:28.106124 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzs55\" (UniqueName: \"kubernetes.io/projected/3448f12c-52a4-47a3-8247-084746b6c32f-kube-api-access-fzs55\") pod \"heat-db-sync-cxp2v\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 19:24:28.106181 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-config-data\") pod \"heat-db-sync-cxp2v\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 19:24:28.111915 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-combined-ca-bundle\") pod \"heat-db-sync-cxp2v\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 19:24:28.112035 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-config-data\") pod \"heat-db-sync-cxp2v\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 19:24:28.123772 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzs55\" (UniqueName: \"kubernetes.io/projected/3448f12c-52a4-47a3-8247-084746b6c32f-kube-api-access-fzs55\") pod \"heat-db-sync-cxp2v\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 
19:24:28.167837 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 19:24:28.589873 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:24:28 crc kubenswrapper[4787]: E0126 19:24:28.590416 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:24:28 crc kubenswrapper[4787]: I0126 19:24:28.756238 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-cxp2v"] Jan 26 19:24:29 crc kubenswrapper[4787]: I0126 19:24:29.431186 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc8dc8649-pzbh7" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.106:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.106:8080: connect: connection refused" Jan 26 19:24:29 crc kubenswrapper[4787]: I0126 19:24:29.522080 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cxp2v" event={"ID":"3448f12c-52a4-47a3-8247-084746b6c32f","Type":"ContainerStarted","Data":"25cfaee35bd4aff65badfc37c60e5c16a14b3d314643d60dd7a50354f88c80f7"} Jan 26 19:24:31 crc kubenswrapper[4787]: I0126 19:24:31.658719 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:31 crc kubenswrapper[4787]: I0126 19:24:31.659131 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-b5b7ff8f-5f4tv" 
Jan 26 19:24:34 crc kubenswrapper[4787]: I0126 19:24:34.057367 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-vgn54"] Jan 26 19:24:34 crc kubenswrapper[4787]: I0126 19:24:34.088227 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-vgn54"] Jan 26 19:24:35 crc kubenswrapper[4787]: I0126 19:24:35.607047 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa8ab24-bc18-4568-ac88-2a3d3682d0f5" path="/var/lib/kubelet/pods/faa8ab24-bc18-4568-ac88-2a3d3682d0f5/volumes" Jan 26 19:24:37 crc kubenswrapper[4787]: I0126 19:24:37.622036 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cxp2v" event={"ID":"3448f12c-52a4-47a3-8247-084746b6c32f","Type":"ContainerStarted","Data":"1b722a02ad4b662a99923569cb7fa32c8cf9096aad7ee1cf0b31cf21dd266139"} Jan 26 19:24:37 crc kubenswrapper[4787]: I0126 19:24:37.654059 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-cxp2v" podStartSLOduration=2.98141222 podStartE2EDuration="10.654033702s" podCreationTimestamp="2026-01-26 19:24:27 +0000 UTC" firstStartedPulling="2026-01-26 19:24:28.763758693 +0000 UTC m=+6037.470894816" lastFinishedPulling="2026-01-26 19:24:36.436380165 +0000 UTC m=+6045.143516298" observedRunningTime="2026-01-26 19:24:37.637106521 +0000 UTC m=+6046.344242674" watchObservedRunningTime="2026-01-26 19:24:37.654033702 +0000 UTC m=+6046.361169845" Jan 26 19:24:39 crc kubenswrapper[4787]: I0126 19:24:39.431798 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc8dc8649-pzbh7" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.106:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.106:8080: connect: connection refused" Jan 26 19:24:39 crc kubenswrapper[4787]: I0126 19:24:39.432256 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/horizon-cc8dc8649-pzbh7" Jan 26 19:24:39 crc kubenswrapper[4787]: I0126 19:24:39.643219 4787 generic.go:334] "Generic (PLEG): container finished" podID="3448f12c-52a4-47a3-8247-084746b6c32f" containerID="1b722a02ad4b662a99923569cb7fa32c8cf9096aad7ee1cf0b31cf21dd266139" exitCode=0 Jan 26 19:24:39 crc kubenswrapper[4787]: I0126 19:24:39.643517 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cxp2v" event={"ID":"3448f12c-52a4-47a3-8247-084746b6c32f","Type":"ContainerDied","Data":"1b722a02ad4b662a99923569cb7fa32c8cf9096aad7ee1cf0b31cf21dd266139"} Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.093019 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.218752 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-combined-ca-bundle\") pod \"3448f12c-52a4-47a3-8247-084746b6c32f\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.218801 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzs55\" (UniqueName: \"kubernetes.io/projected/3448f12c-52a4-47a3-8247-084746b6c32f-kube-api-access-fzs55\") pod \"3448f12c-52a4-47a3-8247-084746b6c32f\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.218838 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-config-data\") pod \"3448f12c-52a4-47a3-8247-084746b6c32f\" (UID: \"3448f12c-52a4-47a3-8247-084746b6c32f\") " Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.223935 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3448f12c-52a4-47a3-8247-084746b6c32f-kube-api-access-fzs55" (OuterVolumeSpecName: "kube-api-access-fzs55") pod "3448f12c-52a4-47a3-8247-084746b6c32f" (UID: "3448f12c-52a4-47a3-8247-084746b6c32f"). InnerVolumeSpecName "kube-api-access-fzs55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.254167 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3448f12c-52a4-47a3-8247-084746b6c32f" (UID: "3448f12c-52a4-47a3-8247-084746b6c32f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.294796 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-config-data" (OuterVolumeSpecName: "config-data") pod "3448f12c-52a4-47a3-8247-084746b6c32f" (UID: "3448f12c-52a4-47a3-8247-084746b6c32f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.322009 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.322045 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzs55\" (UniqueName: \"kubernetes.io/projected/3448f12c-52a4-47a3-8247-084746b6c32f-kube-api-access-fzs55\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.322059 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3448f12c-52a4-47a3-8247-084746b6c32f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.660360 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b5b7ff8f-5f4tv" podUID="f1fed33f-a577-4398-bd7f-dc9231312768" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.111:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8080: connect: connection refused" Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.663917 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cxp2v" event={"ID":"3448f12c-52a4-47a3-8247-084746b6c32f","Type":"ContainerDied","Data":"25cfaee35bd4aff65badfc37c60e5c16a14b3d314643d60dd7a50354f88c80f7"} Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.663971 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25cfaee35bd4aff65badfc37c60e5c16a14b3d314643d60dd7a50354f88c80f7" Jan 26 19:24:41 crc kubenswrapper[4787]: I0126 19:24:41.664025 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-cxp2v" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.779366 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5765f6b7db-rc55s"] Jan 26 19:24:42 crc kubenswrapper[4787]: E0126 19:24:42.780266 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3448f12c-52a4-47a3-8247-084746b6c32f" containerName="heat-db-sync" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.780287 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3448f12c-52a4-47a3-8247-084746b6c32f" containerName="heat-db-sync" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.780516 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3448f12c-52a4-47a3-8247-084746b6c32f" containerName="heat-db-sync" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.781494 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.788922 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.789244 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.789550 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nnjp4" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.808877 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5765f6b7db-rc55s"] Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.856937 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-combined-ca-bundle\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: 
\"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.857038 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-config-data\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: \"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.857108 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5g9l\" (UniqueName: \"kubernetes.io/projected/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-kube-api-access-v5g9l\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: \"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.857223 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-config-data-custom\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: \"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.920355 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7f84c7746-b2flw"] Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.927879 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.937491 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.944477 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7f84c7746-b2flw"] Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.959200 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-config-data\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: \"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.959292 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5g9l\" (UniqueName: \"kubernetes.io/projected/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-kube-api-access-v5g9l\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: \"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.959355 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-config-data-custom\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: \"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.959427 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-combined-ca-bundle\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: \"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:42 crc 
kubenswrapper[4787]: I0126 19:24:42.966763 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-config-data\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: \"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.968147 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-combined-ca-bundle\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: \"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:42 crc kubenswrapper[4787]: I0126 19:24:42.983879 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-config-data-custom\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: \"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.008860 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5g9l\" (UniqueName: \"kubernetes.io/projected/0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4-kube-api-access-v5g9l\") pod \"heat-engine-5765f6b7db-rc55s\" (UID: \"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4\") " pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.032638 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5b98f465d7-qqj9m"] Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.033988 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.037149 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.046345 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b98f465d7-qqj9m"] Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.060565 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfd1ead-fea5-4f01-b7a8-944f010495ad-combined-ca-bundle\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.060630 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cfd1ead-fea5-4f01-b7a8-944f010495ad-config-data-custom\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.060709 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfd1ead-fea5-4f01-b7a8-944f010495ad-config-data\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.060732 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbvhj\" (UniqueName: \"kubernetes.io/projected/6cfd1ead-fea5-4f01-b7a8-944f010495ad-kube-api-access-wbvhj\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " 
pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.120850 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.162394 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dc2cf8e-097c-4099-a370-39e5a69eb862-config-data\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.162448 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tlkl\" (UniqueName: \"kubernetes.io/projected/5dc2cf8e-097c-4099-a370-39e5a69eb862-kube-api-access-5tlkl\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.162514 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfd1ead-fea5-4f01-b7a8-944f010495ad-combined-ca-bundle\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.162553 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dc2cf8e-097c-4099-a370-39e5a69eb862-combined-ca-bundle\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.162585 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cfd1ead-fea5-4f01-b7a8-944f010495ad-config-data-custom\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.162674 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dc2cf8e-097c-4099-a370-39e5a69eb862-config-data-custom\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.162709 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfd1ead-fea5-4f01-b7a8-944f010495ad-config-data\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.162739 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbvhj\" (UniqueName: \"kubernetes.io/projected/6cfd1ead-fea5-4f01-b7a8-944f010495ad-kube-api-access-wbvhj\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.178414 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfd1ead-fea5-4f01-b7a8-944f010495ad-combined-ca-bundle\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.181389 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6cfd1ead-fea5-4f01-b7a8-944f010495ad-config-data-custom\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.183243 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfd1ead-fea5-4f01-b7a8-944f010495ad-config-data\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.208195 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbvhj\" (UniqueName: \"kubernetes.io/projected/6cfd1ead-fea5-4f01-b7a8-944f010495ad-kube-api-access-wbvhj\") pod \"heat-api-7f84c7746-b2flw\" (UID: \"6cfd1ead-fea5-4f01-b7a8-944f010495ad\") " pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.261442 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.265809 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dc2cf8e-097c-4099-a370-39e5a69eb862-config-data\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.265896 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tlkl\" (UniqueName: \"kubernetes.io/projected/5dc2cf8e-097c-4099-a370-39e5a69eb862-kube-api-access-5tlkl\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.266075 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dc2cf8e-097c-4099-a370-39e5a69eb862-combined-ca-bundle\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.266323 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dc2cf8e-097c-4099-a370-39e5a69eb862-config-data-custom\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.273121 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dc2cf8e-097c-4099-a370-39e5a69eb862-combined-ca-bundle\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " 
pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.286033 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dc2cf8e-097c-4099-a370-39e5a69eb862-config-data-custom\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.303972 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tlkl\" (UniqueName: \"kubernetes.io/projected/5dc2cf8e-097c-4099-a370-39e5a69eb862-kube-api-access-5tlkl\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.308123 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dc2cf8e-097c-4099-a370-39e5a69eb862-config-data\") pod \"heat-cfnapi-5b98f465d7-qqj9m\" (UID: \"5dc2cf8e-097c-4099-a370-39e5a69eb862\") " pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.378571 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.589231 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:24:43 crc kubenswrapper[4787]: E0126 19:24:43.589766 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:24:43 crc kubenswrapper[4787]: I0126 19:24:43.800008 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5765f6b7db-rc55s"] Jan 26 19:24:43 crc kubenswrapper[4787]: W0126 19:24:43.844242 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f7a71ea_1ae4_4f9b_b9dc_87c1671738a4.slice/crio-a65888d7a7a92dcb28bdd172100b90b2ecbb5d1aae7c1bf842c3ba43e4ef2e35 WatchSource:0}: Error finding container a65888d7a7a92dcb28bdd172100b90b2ecbb5d1aae7c1bf842c3ba43e4ef2e35: Status 404 returned error can't find the container with id a65888d7a7a92dcb28bdd172100b90b2ecbb5d1aae7c1bf842c3ba43e4ef2e35 Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.060912 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7f84c7746-b2flw"] Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.061278 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b98f465d7-qqj9m"] Jan 26 19:24:44 crc kubenswrapper[4787]: W0126 19:24:44.068286 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cfd1ead_fea5_4f01_b7a8_944f010495ad.slice/crio-e61e3affdc2ba062534d4d643d1c81fb6d9f6c591bed557479baa9ea36bd5855 WatchSource:0}: Error finding container e61e3affdc2ba062534d4d643d1c81fb6d9f6c591bed557479baa9ea36bd5855: Status 404 returned error can't find the container with id e61e3affdc2ba062534d4d643d1c81fb6d9f6c591bed557479baa9ea36bd5855 Jan 26 19:24:44 crc kubenswrapper[4787]: W0126 19:24:44.071759 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dc2cf8e_097c_4099_a370_39e5a69eb862.slice/crio-7b5df9e645e0fe5c9c52d19c3625a502e2b7dbcf0a76f625c51ffc9a93218fb5 WatchSource:0}: Error finding container 7b5df9e645e0fe5c9c52d19c3625a502e2b7dbcf0a76f625c51ffc9a93218fb5: Status 404 returned error can't find the container with id 7b5df9e645e0fe5c9c52d19c3625a502e2b7dbcf0a76f625c51ffc9a93218fb5 Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.483708 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cc8dc8649-pzbh7" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.607543 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-config-data\") pod \"921ee18e-4ea9-4270-9f29-515a05ca5eff\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.607611 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/921ee18e-4ea9-4270-9f29-515a05ca5eff-horizon-secret-key\") pod \"921ee18e-4ea9-4270-9f29-515a05ca5eff\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.607683 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/921ee18e-4ea9-4270-9f29-515a05ca5eff-logs\") pod \"921ee18e-4ea9-4270-9f29-515a05ca5eff\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.607771 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9h76\" (UniqueName: \"kubernetes.io/projected/921ee18e-4ea9-4270-9f29-515a05ca5eff-kube-api-access-n9h76\") pod \"921ee18e-4ea9-4270-9f29-515a05ca5eff\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.607888 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-scripts\") pod \"921ee18e-4ea9-4270-9f29-515a05ca5eff\" (UID: \"921ee18e-4ea9-4270-9f29-515a05ca5eff\") " Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.608852 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/921ee18e-4ea9-4270-9f29-515a05ca5eff-logs" (OuterVolumeSpecName: "logs") pod "921ee18e-4ea9-4270-9f29-515a05ca5eff" (UID: "921ee18e-4ea9-4270-9f29-515a05ca5eff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.615095 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921ee18e-4ea9-4270-9f29-515a05ca5eff-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "921ee18e-4ea9-4270-9f29-515a05ca5eff" (UID: "921ee18e-4ea9-4270-9f29-515a05ca5eff"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.623843 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921ee18e-4ea9-4270-9f29-515a05ca5eff-kube-api-access-n9h76" (OuterVolumeSpecName: "kube-api-access-n9h76") pod "921ee18e-4ea9-4270-9f29-515a05ca5eff" (UID: "921ee18e-4ea9-4270-9f29-515a05ca5eff"). InnerVolumeSpecName "kube-api-access-n9h76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.639401 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-scripts" (OuterVolumeSpecName: "scripts") pod "921ee18e-4ea9-4270-9f29-515a05ca5eff" (UID: "921ee18e-4ea9-4270-9f29-515a05ca5eff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.643144 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-config-data" (OuterVolumeSpecName: "config-data") pod "921ee18e-4ea9-4270-9f29-515a05ca5eff" (UID: "921ee18e-4ea9-4270-9f29-515a05ca5eff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.709691 4787 generic.go:334] "Generic (PLEG): container finished" podID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerID="1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e" exitCode=137 Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.709763 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cc8dc8649-pzbh7" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.709783 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc8dc8649-pzbh7" event={"ID":"921ee18e-4ea9-4270-9f29-515a05ca5eff","Type":"ContainerDied","Data":"1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e"} Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.710495 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc8dc8649-pzbh7" event={"ID":"921ee18e-4ea9-4270-9f29-515a05ca5eff","Type":"ContainerDied","Data":"a54e8edf83fa0651a627840bfb6d16484f9a2490cc0a862a1f8d4091fde323ac"} Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.710536 4787 scope.go:117] "RemoveContainer" containerID="c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.711739 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.712436 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/921ee18e-4ea9-4270-9f29-515a05ca5eff-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.712462 4787 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/921ee18e-4ea9-4270-9f29-515a05ca5eff-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.712478 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/921ee18e-4ea9-4270-9f29-515a05ca5eff-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.712509 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9h76\" (UniqueName: \"kubernetes.io/projected/921ee18e-4ea9-4270-9f29-515a05ca5eff-kube-api-access-n9h76\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.726688 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" event={"ID":"5dc2cf8e-097c-4099-a370-39e5a69eb862","Type":"ContainerStarted","Data":"7b5df9e645e0fe5c9c52d19c3625a502e2b7dbcf0a76f625c51ffc9a93218fb5"} Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.728902 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7f84c7746-b2flw" event={"ID":"6cfd1ead-fea5-4f01-b7a8-944f010495ad","Type":"ContainerStarted","Data":"e61e3affdc2ba062534d4d643d1c81fb6d9f6c591bed557479baa9ea36bd5855"} Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.734579 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5765f6b7db-rc55s" event={"ID":"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4","Type":"ContainerStarted","Data":"1a259fa368f1fe5b719628ed0c2e40123f28945e22a034e1d11fbcd567e3550d"} Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.734645 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5765f6b7db-rc55s" event={"ID":"0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4","Type":"ContainerStarted","Data":"a65888d7a7a92dcb28bdd172100b90b2ecbb5d1aae7c1bf842c3ba43e4ef2e35"} Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.734931 4787 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.758236 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cc8dc8649-pzbh7"] Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.771015 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cc8dc8649-pzbh7"] Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.774351 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5765f6b7db-rc55s" podStartSLOduration=2.774323639 podStartE2EDuration="2.774323639s" podCreationTimestamp="2026-01-26 19:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:24:44.767461813 +0000 UTC m=+6053.474597966" watchObservedRunningTime="2026-01-26 19:24:44.774323639 +0000 UTC m=+6053.481459772" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.936805 4787 scope.go:117] "RemoveContainer" containerID="1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.964547 4787 scope.go:117] "RemoveContainer" containerID="c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3" Jan 26 19:24:44 crc kubenswrapper[4787]: E0126 19:24:44.965061 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3\": container with ID starting with c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3 not found: ID does not exist" containerID="c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.965106 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3"} err="failed to get container status \"c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3\": rpc error: code = NotFound desc = could not find container \"c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3\": container with ID starting with c8aced9b16db60cea89e503f36b86c88f2be740e4761f1c82a6259ab6c32c4f3 not found: ID does not exist" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.965203 4787 scope.go:117] "RemoveContainer" containerID="1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e" Jan 26 19:24:44 crc kubenswrapper[4787]: E0126 19:24:44.965575 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e\": container with ID starting with 1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e not found: ID does not exist" containerID="1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e" Jan 26 19:24:44 crc kubenswrapper[4787]: I0126 19:24:44.965598 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e"} err="failed to get container status \"1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e\": rpc error: code = NotFound desc = could not find container \"1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e\": container with ID starting with 1f17036a1a0085198352be6995b3c1f18611b6cf0b9385294df4535f4aa3c47e not found: ID does not exist" Jan 26 19:24:45 crc kubenswrapper[4787]: I0126 19:24:45.609786 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" path="/var/lib/kubelet/pods/921ee18e-4ea9-4270-9f29-515a05ca5eff/volumes" Jan 26 19:24:46 crc kubenswrapper[4787]: I0126 
19:24:46.758561 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7f84c7746-b2flw" event={"ID":"6cfd1ead-fea5-4f01-b7a8-944f010495ad","Type":"ContainerStarted","Data":"e902410a04b938355b7e1c490ed68e479619acea8539eb37f1af8791e717278c"} Jan 26 19:24:46 crc kubenswrapper[4787]: I0126 19:24:46.759426 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:46 crc kubenswrapper[4787]: I0126 19:24:46.762809 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" event={"ID":"5dc2cf8e-097c-4099-a370-39e5a69eb862","Type":"ContainerStarted","Data":"0e881815b39293810bd310715d450fefbe912e6ab89d1fb6b02d78d1c91d601e"} Jan 26 19:24:46 crc kubenswrapper[4787]: I0126 19:24:46.763010 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:46 crc kubenswrapper[4787]: I0126 19:24:46.787214 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7f84c7746-b2flw" podStartSLOduration=2.8740788950000002 podStartE2EDuration="4.787188265s" podCreationTimestamp="2026-01-26 19:24:42 +0000 UTC" firstStartedPulling="2026-01-26 19:24:44.071910597 +0000 UTC m=+6052.779046730" lastFinishedPulling="2026-01-26 19:24:45.985019967 +0000 UTC m=+6054.692156100" observedRunningTime="2026-01-26 19:24:46.776618288 +0000 UTC m=+6055.483754421" watchObservedRunningTime="2026-01-26 19:24:46.787188265 +0000 UTC m=+6055.494324398" Jan 26 19:24:46 crc kubenswrapper[4787]: I0126 19:24:46.807507 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" podStartSLOduration=2.896677334 podStartE2EDuration="4.807484999s" podCreationTimestamp="2026-01-26 19:24:42 +0000 UTC" firstStartedPulling="2026-01-26 19:24:44.078416724 +0000 UTC m=+6052.785552857" lastFinishedPulling="2026-01-26 19:24:45.989224379 +0000 UTC 
m=+6054.696360522" observedRunningTime="2026-01-26 19:24:46.803180003 +0000 UTC m=+6055.510316146" watchObservedRunningTime="2026-01-26 19:24:46.807484999 +0000 UTC m=+6055.514621132" Jan 26 19:24:53 crc kubenswrapper[4787]: I0126 19:24:53.772568 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:54 crc kubenswrapper[4787]: I0126 19:24:54.682198 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7f84c7746-b2flw" Jan 26 19:24:54 crc kubenswrapper[4787]: I0126 19:24:54.838694 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5b98f465d7-qqj9m" Jan 26 19:24:55 crc kubenswrapper[4787]: I0126 19:24:55.650359 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-b5b7ff8f-5f4tv" Jan 26 19:24:55 crc kubenswrapper[4787]: I0126 19:24:55.707354 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-655fbc65c-zp8cb"] Jan 26 19:24:55 crc kubenswrapper[4787]: I0126 19:24:55.707589 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-655fbc65c-zp8cb" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon-log" containerID="cri-o://0f9ab83d2349a71967960d1673e6245a78af58a932df3ba9ae5b78f7c4957451" gracePeriod=30 Jan 26 19:24:55 crc kubenswrapper[4787]: I0126 19:24:55.708004 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-655fbc65c-zp8cb" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon" containerID="cri-o://3132cbb1395416c0894b5e772e4c43286622faf505f5fc9254937c3e38300c6b" gracePeriod=30 Jan 26 19:24:57 crc kubenswrapper[4787]: I0126 19:24:57.589813 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:24:57 crc kubenswrapper[4787]: E0126 
19:24:57.590751 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:24:59 crc kubenswrapper[4787]: I0126 19:24:59.892219 4787 generic.go:334] "Generic (PLEG): container finished" podID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerID="3132cbb1395416c0894b5e772e4c43286622faf505f5fc9254937c3e38300c6b" exitCode=0 Jan 26 19:24:59 crc kubenswrapper[4787]: I0126 19:24:59.892327 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655fbc65c-zp8cb" event={"ID":"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc","Type":"ContainerDied","Data":"3132cbb1395416c0894b5e772e4c43286622faf505f5fc9254937c3e38300c6b"} Jan 26 19:25:00 crc kubenswrapper[4787]: I0126 19:25:00.053562 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-655fbc65c-zp8cb" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 26 19:25:00 crc kubenswrapper[4787]: I0126 19:25:00.169265 4787 scope.go:117] "RemoveContainer" containerID="c69d940a2488ee8d50357aeb98a2c5747bddc863f894bd0a4274e8ab7012f736" Jan 26 19:25:00 crc kubenswrapper[4787]: I0126 19:25:00.221063 4787 scope.go:117] "RemoveContainer" containerID="5609e0dd66bf317063044b328100b89b3775947bea9a25431ea46cf1711f99b2" Jan 26 19:25:00 crc kubenswrapper[4787]: I0126 19:25:00.298699 4787 scope.go:117] "RemoveContainer" containerID="aa828fd9b8fc393d71b24935b9ddc74f127e03451928bc807aec4eb6644f7641" Jan 26 19:25:03 crc kubenswrapper[4787]: I0126 19:25:03.149029 
4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5765f6b7db-rc55s" Jan 26 19:25:04 crc kubenswrapper[4787]: I0126 19:25:04.049210 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vhnqg"] Jan 26 19:25:04 crc kubenswrapper[4787]: I0126 19:25:04.076663 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-38b5-account-create-update-8l77l"] Jan 26 19:25:04 crc kubenswrapper[4787]: I0126 19:25:04.091388 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-38b5-account-create-update-8l77l"] Jan 26 19:25:04 crc kubenswrapper[4787]: I0126 19:25:04.099184 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vhnqg"] Jan 26 19:25:05 crc kubenswrapper[4787]: I0126 19:25:05.601859 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2633f61e-8554-4bd8-968f-bc6f4b8532ab" path="/var/lib/kubelet/pods/2633f61e-8554-4bd8-968f-bc6f4b8532ab/volumes" Jan 26 19:25:05 crc kubenswrapper[4787]: I0126 19:25:05.603737 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2966afba-49ec-4933-9c11-878ebdf1c724" path="/var/lib/kubelet/pods/2966afba-49ec-4933-9c11-878ebdf1c724/volumes" Jan 26 19:25:09 crc kubenswrapper[4787]: I0126 19:25:09.589320 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:25:09 crc kubenswrapper[4787]: E0126 19:25:09.589894 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:25:10 crc 
kubenswrapper[4787]: I0126 19:25:10.053441 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-655fbc65c-zp8cb" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 26 19:25:11 crc kubenswrapper[4787]: I0126 19:25:11.039463 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-24hgp"] Jan 26 19:25:11 crc kubenswrapper[4787]: I0126 19:25:11.049106 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-24hgp"] Jan 26 19:25:11 crc kubenswrapper[4787]: I0126 19:25:11.610381 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc0338e1-d85f-4da2-a41f-7791cdfeb108" path="/var/lib/kubelet/pods/cc0338e1-d85f-4da2-a41f-7791cdfeb108/volumes" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.482915 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55"] Jan 26 19:25:12 crc kubenswrapper[4787]: E0126 19:25:12.483799 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.483811 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon" Jan 26 19:25:12 crc kubenswrapper[4787]: E0126 19:25:12.483849 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon-log" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.483855 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon-log" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.484079 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.484098 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="921ee18e-4ea9-4270-9f29-515a05ca5eff" containerName="horizon-log" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.485497 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.487827 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.522269 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55"] Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.604031 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd7x4\" (UniqueName: \"kubernetes.io/projected/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-kube-api-access-cd7x4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.604130 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.604340 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.706855 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.707109 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.707229 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd7x4\" (UniqueName: \"kubernetes.io/projected/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-kube-api-access-cd7x4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.708098 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.708433 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.727230 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd7x4\" (UniqueName: \"kubernetes.io/projected/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-kube-api-access-cd7x4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:12 crc kubenswrapper[4787]: I0126 19:25:12.804707 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:13 crc kubenswrapper[4787]: I0126 19:25:13.420116 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55"] Jan 26 19:25:13 crc kubenswrapper[4787]: W0126 19:25:13.424531 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3d2d7fa_6278_458e_8c4a_53cd612f13bb.slice/crio-05575bbabb1c698a62368a070ccc7299cb73ccff6c6811dbead1ea1cbe0672b3 WatchSource:0}: Error finding container 05575bbabb1c698a62368a070ccc7299cb73ccff6c6811dbead1ea1cbe0672b3: Status 404 returned error can't find the container with id 05575bbabb1c698a62368a070ccc7299cb73ccff6c6811dbead1ea1cbe0672b3 Jan 26 19:25:14 crc kubenswrapper[4787]: I0126 19:25:14.023818 4787 generic.go:334] "Generic (PLEG): container finished" podID="a3d2d7fa-6278-458e-8c4a-53cd612f13bb" containerID="54aad4d7d5ce084ccbf28eb4f88438192c4a633b171111659b41e64e0893f221" exitCode=0 Jan 26 19:25:14 crc kubenswrapper[4787]: I0126 19:25:14.023942 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" event={"ID":"a3d2d7fa-6278-458e-8c4a-53cd612f13bb","Type":"ContainerDied","Data":"54aad4d7d5ce084ccbf28eb4f88438192c4a633b171111659b41e64e0893f221"} Jan 26 19:25:14 crc kubenswrapper[4787]: I0126 19:25:14.024476 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" event={"ID":"a3d2d7fa-6278-458e-8c4a-53cd612f13bb","Type":"ContainerStarted","Data":"05575bbabb1c698a62368a070ccc7299cb73ccff6c6811dbead1ea1cbe0672b3"} Jan 26 19:25:19 crc kubenswrapper[4787]: I0126 19:25:19.066422 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="a3d2d7fa-6278-458e-8c4a-53cd612f13bb" containerID="08708b7a44970aa3073faf41381caa4efe5e2157b14e38dda138b27a1084e68b" exitCode=0 Jan 26 19:25:19 crc kubenswrapper[4787]: I0126 19:25:19.066867 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" event={"ID":"a3d2d7fa-6278-458e-8c4a-53cd612f13bb","Type":"ContainerDied","Data":"08708b7a44970aa3073faf41381caa4efe5e2157b14e38dda138b27a1084e68b"} Jan 26 19:25:20 crc kubenswrapper[4787]: I0126 19:25:20.054749 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-655fbc65c-zp8cb" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 26 19:25:20 crc kubenswrapper[4787]: I0126 19:25:20.055212 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:25:20 crc kubenswrapper[4787]: I0126 19:25:20.081988 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" event={"ID":"a3d2d7fa-6278-458e-8c4a-53cd612f13bb","Type":"ContainerStarted","Data":"c1aac9af3ace9f24ba9224ea2336ebc76f50b196b68365e35876732bc4cc2da8"} Jan 26 19:25:20 crc kubenswrapper[4787]: I0126 19:25:20.109179 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" podStartSLOduration=3.419793696 podStartE2EDuration="8.109151936s" podCreationTimestamp="2026-01-26 19:25:12 +0000 UTC" firstStartedPulling="2026-01-26 19:25:14.028427296 +0000 UTC m=+6082.735563429" lastFinishedPulling="2026-01-26 19:25:18.717785536 +0000 UTC m=+6087.424921669" observedRunningTime="2026-01-26 19:25:20.097544714 +0000 UTC 
m=+6088.804680867" watchObservedRunningTime="2026-01-26 19:25:20.109151936 +0000 UTC m=+6088.816288069" Jan 26 19:25:21 crc kubenswrapper[4787]: I0126 19:25:21.093399 4787 generic.go:334] "Generic (PLEG): container finished" podID="a3d2d7fa-6278-458e-8c4a-53cd612f13bb" containerID="c1aac9af3ace9f24ba9224ea2336ebc76f50b196b68365e35876732bc4cc2da8" exitCode=0 Jan 26 19:25:21 crc kubenswrapper[4787]: I0126 19:25:21.093591 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" event={"ID":"a3d2d7fa-6278-458e-8c4a-53cd612f13bb","Type":"ContainerDied","Data":"c1aac9af3ace9f24ba9224ea2336ebc76f50b196b68365e35876732bc4cc2da8"} Jan 26 19:25:22 crc kubenswrapper[4787]: I0126 19:25:22.438795 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:22 crc kubenswrapper[4787]: I0126 19:25:22.589739 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:25:22 crc kubenswrapper[4787]: E0126 19:25:22.590453 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:25:22 crc kubenswrapper[4787]: I0126 19:25:22.602332 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-util\") pod \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " Jan 26 19:25:22 crc 
kubenswrapper[4787]: I0126 19:25:22.602394 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd7x4\" (UniqueName: \"kubernetes.io/projected/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-kube-api-access-cd7x4\") pod \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " Jan 26 19:25:22 crc kubenswrapper[4787]: I0126 19:25:22.602470 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-bundle\") pod \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\" (UID: \"a3d2d7fa-6278-458e-8c4a-53cd612f13bb\") " Jan 26 19:25:22 crc kubenswrapper[4787]: I0126 19:25:22.605881 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-bundle" (OuterVolumeSpecName: "bundle") pod "a3d2d7fa-6278-458e-8c4a-53cd612f13bb" (UID: "a3d2d7fa-6278-458e-8c4a-53cd612f13bb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:25:22 crc kubenswrapper[4787]: I0126 19:25:22.610180 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-kube-api-access-cd7x4" (OuterVolumeSpecName: "kube-api-access-cd7x4") pod "a3d2d7fa-6278-458e-8c4a-53cd612f13bb" (UID: "a3d2d7fa-6278-458e-8c4a-53cd612f13bb"). InnerVolumeSpecName "kube-api-access-cd7x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:25:22 crc kubenswrapper[4787]: I0126 19:25:22.613246 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-util" (OuterVolumeSpecName: "util") pod "a3d2d7fa-6278-458e-8c4a-53cd612f13bb" (UID: "a3d2d7fa-6278-458e-8c4a-53cd612f13bb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:25:22 crc kubenswrapper[4787]: I0126 19:25:22.706254 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd7x4\" (UniqueName: \"kubernetes.io/projected/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-kube-api-access-cd7x4\") on node \"crc\" DevicePath \"\"" Jan 26 19:25:22 crc kubenswrapper[4787]: I0126 19:25:22.706308 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:25:22 crc kubenswrapper[4787]: I0126 19:25:22.706319 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3d2d7fa-6278-458e-8c4a-53cd612f13bb-util\") on node \"crc\" DevicePath \"\"" Jan 26 19:25:23 crc kubenswrapper[4787]: I0126 19:25:23.117808 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" event={"ID":"a3d2d7fa-6278-458e-8c4a-53cd612f13bb","Type":"ContainerDied","Data":"05575bbabb1c698a62368a070ccc7299cb73ccff6c6811dbead1ea1cbe0672b3"} Jan 26 19:25:23 crc kubenswrapper[4787]: I0126 19:25:23.117872 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05575bbabb1c698a62368a070ccc7299cb73ccff6c6811dbead1ea1cbe0672b3" Jan 26 19:25:23 crc kubenswrapper[4787]: I0126 19:25:23.117877 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.146108 4787 generic.go:334] "Generic (PLEG): container finished" podID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerID="0f9ab83d2349a71967960d1673e6245a78af58a932df3ba9ae5b78f7c4957451" exitCode=137 Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.146187 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655fbc65c-zp8cb" event={"ID":"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc","Type":"ContainerDied","Data":"0f9ab83d2349a71967960d1673e6245a78af58a932df3ba9ae5b78f7c4957451"} Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.147107 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655fbc65c-zp8cb" event={"ID":"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc","Type":"ContainerDied","Data":"d25e84e145ec5c2df676c4281eb16c0de4db7eaa436a4cc37a6da2fd3c628dd8"} Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.147134 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d25e84e145ec5c2df676c4281eb16c0de4db7eaa436a4cc37a6da2fd3c628dd8" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.153675 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.278705 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-scripts\") pod \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.279575 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-horizon-secret-key\") pod \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.279628 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-logs\") pod \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.279880 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-config-data\") pod \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.279935 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xp48\" (UniqueName: \"kubernetes.io/projected/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-kube-api-access-9xp48\") pod \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\" (UID: \"ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc\") " Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.282280 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-logs" (OuterVolumeSpecName: "logs") pod "ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" (UID: "ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.285905 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" (UID: "ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.289905 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-kube-api-access-9xp48" (OuterVolumeSpecName: "kube-api-access-9xp48") pod "ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" (UID: "ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc"). InnerVolumeSpecName "kube-api-access-9xp48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.309170 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-scripts" (OuterVolumeSpecName: "scripts") pod "ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" (UID: "ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.318089 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-config-data" (OuterVolumeSpecName: "config-data") pod "ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" (UID: "ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.383018 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xp48\" (UniqueName: \"kubernetes.io/projected/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-kube-api-access-9xp48\") on node \"crc\" DevicePath \"\"" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.383080 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.383094 4787 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.383105 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:25:26 crc kubenswrapper[4787]: I0126 19:25:26.383118 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:25:27 crc kubenswrapper[4787]: I0126 19:25:27.156134 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-655fbc65c-zp8cb" Jan 26 19:25:27 crc kubenswrapper[4787]: I0126 19:25:27.206158 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-655fbc65c-zp8cb"] Jan 26 19:25:27 crc kubenswrapper[4787]: I0126 19:25:27.215211 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-655fbc65c-zp8cb"] Jan 26 19:25:27 crc kubenswrapper[4787]: I0126 19:25:27.602121 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" path="/var/lib/kubelet/pods/ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc/volumes" Jan 26 19:25:34 crc kubenswrapper[4787]: I0126 19:25:34.594171 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:25:34 crc kubenswrapper[4787]: E0126 19:25:34.598460 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.497125 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tlzxd"] Jan 26 19:25:35 crc kubenswrapper[4787]: E0126 19:25:35.497854 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d2d7fa-6278-458e-8c4a-53cd612f13bb" containerName="pull" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.497871 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d2d7fa-6278-458e-8c4a-53cd612f13bb" containerName="pull" Jan 26 19:25:35 crc kubenswrapper[4787]: E0126 19:25:35.497887 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.497895 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon" Jan 26 19:25:35 crc kubenswrapper[4787]: E0126 19:25:35.497914 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d2d7fa-6278-458e-8c4a-53cd612f13bb" containerName="util" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.497920 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d2d7fa-6278-458e-8c4a-53cd612f13bb" containerName="util" Jan 26 19:25:35 crc kubenswrapper[4787]: E0126 19:25:35.497938 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d2d7fa-6278-458e-8c4a-53cd612f13bb" containerName="extract" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.497958 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d2d7fa-6278-458e-8c4a-53cd612f13bb" containerName="extract" Jan 26 19:25:35 crc kubenswrapper[4787]: E0126 19:25:35.497966 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon-log" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.497972 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon-log" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.498142 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.498170 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d2d7fa-6278-458e-8c4a-53cd612f13bb" containerName="extract" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.498181 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef0fd6-c0ab-4ec6-b1e6-3065a9e392fc" containerName="horizon-log" Jan 26 
19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.498858 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tlzxd" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.501103 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.501266 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.501410 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-twflc" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.508223 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tlzxd"] Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.589313 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8zpl\" (UniqueName: \"kubernetes.io/projected/5464b021-25b7-421a-84fb-464912bd7891-kube-api-access-l8zpl\") pod \"obo-prometheus-operator-68bc856cb9-tlzxd\" (UID: \"5464b021-25b7-421a-84fb-464912bd7891\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tlzxd" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.621657 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz"] Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.623322 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.626318 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.626595 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-2ljzd" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.635403 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf"] Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.637155 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.647767 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz"] Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.660057 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf"] Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.693189 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8zpl\" (UniqueName: \"kubernetes.io/projected/5464b021-25b7-421a-84fb-464912bd7891-kube-api-access-l8zpl\") pod \"obo-prometheus-operator-68bc856cb9-tlzxd\" (UID: \"5464b021-25b7-421a-84fb-464912bd7891\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tlzxd" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.715249 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8zpl\" (UniqueName: 
\"kubernetes.io/projected/5464b021-25b7-421a-84fb-464912bd7891-kube-api-access-l8zpl\") pod \"obo-prometheus-operator-68bc856cb9-tlzxd\" (UID: \"5464b021-25b7-421a-84fb-464912bd7891\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tlzxd" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.794939 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c9a4c7d-e1c8-4b61-b756-c32495dfc027-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf\" (UID: \"9c9a4c7d-e1c8-4b61-b756-c32495dfc027\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.795441 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97e77b51-0240-4d93-8966-5b1e733ccf08-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz\" (UID: \"97e77b51-0240-4d93-8966-5b1e733ccf08\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.795628 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97e77b51-0240-4d93-8966-5b1e733ccf08-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz\" (UID: \"97e77b51-0240-4d93-8966-5b1e733ccf08\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.795734 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c9a4c7d-e1c8-4b61-b756-c32495dfc027-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf\" (UID: \"9c9a4c7d-e1c8-4b61-b756-c32495dfc027\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.810082 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pb6sd"] Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.811673 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.813994 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-cbl94" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.814875 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.831063 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tlzxd" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.840643 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pb6sd"] Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.897652 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97e77b51-0240-4d93-8966-5b1e733ccf08-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz\" (UID: \"97e77b51-0240-4d93-8966-5b1e733ccf08\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.897699 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c9a4c7d-e1c8-4b61-b756-c32495dfc027-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf\" (UID: \"9c9a4c7d-e1c8-4b61-b756-c32495dfc027\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.897761 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c9a4c7d-e1c8-4b61-b756-c32495dfc027-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf\" (UID: \"9c9a4c7d-e1c8-4b61-b756-c32495dfc027\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.897843 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkf2l\" (UniqueName: \"kubernetes.io/projected/3dcba907-3e69-4b6b-bdbf-89b17b09a8f1-kube-api-access-tkf2l\") pod \"observability-operator-59bdc8b94-pb6sd\" (UID: 
\"3dcba907-3e69-4b6b-bdbf-89b17b09a8f1\") " pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.897870 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3dcba907-3e69-4b6b-bdbf-89b17b09a8f1-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pb6sd\" (UID: \"3dcba907-3e69-4b6b-bdbf-89b17b09a8f1\") " pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.898040 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97e77b51-0240-4d93-8966-5b1e733ccf08-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz\" (UID: \"97e77b51-0240-4d93-8966-5b1e733ccf08\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.901320 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c9a4c7d-e1c8-4b61-b756-c32495dfc027-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf\" (UID: \"9c9a4c7d-e1c8-4b61-b756-c32495dfc027\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.902746 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c9a4c7d-e1c8-4b61-b756-c32495dfc027-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf\" (UID: \"9c9a4c7d-e1c8-4b61-b756-c32495dfc027\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.902875 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97e77b51-0240-4d93-8966-5b1e733ccf08-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz\" (UID: \"97e77b51-0240-4d93-8966-5b1e733ccf08\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.903276 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97e77b51-0240-4d93-8966-5b1e733ccf08-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz\" (UID: \"97e77b51-0240-4d93-8966-5b1e733ccf08\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.950569 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz" Jan 26 19:25:35 crc kubenswrapper[4787]: I0126 19:25:35.970550 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.000459 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkf2l\" (UniqueName: \"kubernetes.io/projected/3dcba907-3e69-4b6b-bdbf-89b17b09a8f1-kube-api-access-tkf2l\") pod \"observability-operator-59bdc8b94-pb6sd\" (UID: \"3dcba907-3e69-4b6b-bdbf-89b17b09a8f1\") " pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.000536 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3dcba907-3e69-4b6b-bdbf-89b17b09a8f1-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pb6sd\" (UID: \"3dcba907-3e69-4b6b-bdbf-89b17b09a8f1\") " pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.013771 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3dcba907-3e69-4b6b-bdbf-89b17b09a8f1-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pb6sd\" (UID: \"3dcba907-3e69-4b6b-bdbf-89b17b09a8f1\") " pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.032656 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkf2l\" (UniqueName: \"kubernetes.io/projected/3dcba907-3e69-4b6b-bdbf-89b17b09a8f1-kube-api-access-tkf2l\") pod \"observability-operator-59bdc8b94-pb6sd\" (UID: \"3dcba907-3e69-4b6b-bdbf-89b17b09a8f1\") " pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.066776 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/perses-operator-5bf474d74f-v7v9m"] Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.068667 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.080447 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-r6fxm" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.082647 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-v7v9m"] Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.173572 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.219355 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f8c4487-cd5a-456d-8a69-0b1296b4a687-openshift-service-ca\") pod \"perses-operator-5bf474d74f-v7v9m\" (UID: \"7f8c4487-cd5a-456d-8a69-0b1296b4a687\") " pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.219410 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj6hd\" (UniqueName: \"kubernetes.io/projected/7f8c4487-cd5a-456d-8a69-0b1296b4a687-kube-api-access-vj6hd\") pod \"perses-operator-5bf474d74f-v7v9m\" (UID: \"7f8c4487-cd5a-456d-8a69-0b1296b4a687\") " pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.323904 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f8c4487-cd5a-456d-8a69-0b1296b4a687-openshift-service-ca\") pod 
\"perses-operator-5bf474d74f-v7v9m\" (UID: \"7f8c4487-cd5a-456d-8a69-0b1296b4a687\") " pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.323976 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj6hd\" (UniqueName: \"kubernetes.io/projected/7f8c4487-cd5a-456d-8a69-0b1296b4a687-kube-api-access-vj6hd\") pod \"perses-operator-5bf474d74f-v7v9m\" (UID: \"7f8c4487-cd5a-456d-8a69-0b1296b4a687\") " pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.325073 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f8c4487-cd5a-456d-8a69-0b1296b4a687-openshift-service-ca\") pod \"perses-operator-5bf474d74f-v7v9m\" (UID: \"7f8c4487-cd5a-456d-8a69-0b1296b4a687\") " pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.352431 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj6hd\" (UniqueName: \"kubernetes.io/projected/7f8c4487-cd5a-456d-8a69-0b1296b4a687-kube-api-access-vj6hd\") pod \"perses-operator-5bf474d74f-v7v9m\" (UID: \"7f8c4487-cd5a-456d-8a69-0b1296b4a687\") " pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.474451 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.514903 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tlzxd"] Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.802370 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz"] Jan 26 19:25:36 crc kubenswrapper[4787]: W0126 19:25:36.803349 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97e77b51_0240_4d93_8966_5b1e733ccf08.slice/crio-577fd53ffa33bd0af0dc790b91dd60b30f34a8973b83913e58c356b71c736504 WatchSource:0}: Error finding container 577fd53ffa33bd0af0dc790b91dd60b30f34a8973b83913e58c356b71c736504: Status 404 returned error can't find the container with id 577fd53ffa33bd0af0dc790b91dd60b30f34a8973b83913e58c356b71c736504 Jan 26 19:25:36 crc kubenswrapper[4787]: I0126 19:25:36.883865 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf"] Jan 26 19:25:37 crc kubenswrapper[4787]: W0126 19:25:37.008213 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f8c4487_cd5a_456d_8a69_0b1296b4a687.slice/crio-28886879da753c29d2e26719378e24845f8a87cac18fc4a56454959765824672 WatchSource:0}: Error finding container 28886879da753c29d2e26719378e24845f8a87cac18fc4a56454959765824672: Status 404 returned error can't find the container with id 28886879da753c29d2e26719378e24845f8a87cac18fc4a56454959765824672 Jan 26 19:25:37 crc kubenswrapper[4787]: I0126 19:25:37.019995 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-v7v9m"] Jan 26 19:25:37 crc kubenswrapper[4787]: I0126 19:25:37.034316 
4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pb6sd"] Jan 26 19:25:37 crc kubenswrapper[4787]: I0126 19:25:37.300185 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tlzxd" event={"ID":"5464b021-25b7-421a-84fb-464912bd7891","Type":"ContainerStarted","Data":"dd6cd391b8b059cd5afd2b743605c0701434ad0b214ee4f2ba4bfb94ed51c148"} Jan 26 19:25:37 crc kubenswrapper[4787]: I0126 19:25:37.301892 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf" event={"ID":"9c9a4c7d-e1c8-4b61-b756-c32495dfc027","Type":"ContainerStarted","Data":"c7ac11a8836b02b133347d82e6a3379d466d38a04ff588a5cc3948f62a6c2efa"} Jan 26 19:25:37 crc kubenswrapper[4787]: I0126 19:25:37.303401 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz" event={"ID":"97e77b51-0240-4d93-8966-5b1e733ccf08","Type":"ContainerStarted","Data":"577fd53ffa33bd0af0dc790b91dd60b30f34a8973b83913e58c356b71c736504"} Jan 26 19:25:37 crc kubenswrapper[4787]: I0126 19:25:37.304708 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" event={"ID":"7f8c4487-cd5a-456d-8a69-0b1296b4a687","Type":"ContainerStarted","Data":"28886879da753c29d2e26719378e24845f8a87cac18fc4a56454959765824672"} Jan 26 19:25:37 crc kubenswrapper[4787]: I0126 19:25:37.306059 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" event={"ID":"3dcba907-3e69-4b6b-bdbf-89b17b09a8f1","Type":"ContainerStarted","Data":"28c1328c79b0e21ef630288971a34197a1adf358b1c8e9a142badd58f4013911"} Jan 26 19:25:49 crc kubenswrapper[4787]: I0126 19:25:49.591324 4787 scope.go:117] "RemoveContainer" 
containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:25:49 crc kubenswrapper[4787]: E0126 19:25:49.592311 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:25:51 crc kubenswrapper[4787]: E0126 19:25:51.063233 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" Jan 26 19:25:51 crc kubenswrapper[4787]: E0126 19:25:51.063659 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj6hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5bf474d74f-v7v9m_openshift-operators(7f8c4487-cd5a-456d-8a69-0b1296b4a687): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 19:25:51 crc kubenswrapper[4787]: E0126 19:25:51.064879 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" podUID="7f8c4487-cd5a-456d-8a69-0b1296b4a687" Jan 26 19:25:51 crc kubenswrapper[4787]: E0126 19:25:51.479487 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8\\\"\"" pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" podUID="7f8c4487-cd5a-456d-8a69-0b1296b4a687" Jan 26 19:25:52 crc kubenswrapper[4787]: I0126 19:25:52.487488 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tlzxd" event={"ID":"5464b021-25b7-421a-84fb-464912bd7891","Type":"ContainerStarted","Data":"5f947e86b17c903c14365354248063534303f6c2c1616336b861be5262814cfa"} Jan 26 19:25:52 crc kubenswrapper[4787]: I0126 19:25:52.490456 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf" event={"ID":"9c9a4c7d-e1c8-4b61-b756-c32495dfc027","Type":"ContainerStarted","Data":"232743f0a96c2588019a761c7ca08622f7148ea829ffbd714f42f11aef86759b"} Jan 26 19:25:52 crc kubenswrapper[4787]: I0126 19:25:52.491971 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz" event={"ID":"97e77b51-0240-4d93-8966-5b1e733ccf08","Type":"ContainerStarted","Data":"1c43e38756678cfaef2d4e77231d4d98964626a29134221a515f4d0fd25e309c"} Jan 26 19:25:52 crc kubenswrapper[4787]: I0126 19:25:52.494448 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" 
event={"ID":"3dcba907-3e69-4b6b-bdbf-89b17b09a8f1","Type":"ContainerStarted","Data":"cbf35be0f3523bec74eb9f5eb672048342b5eb776e409cf64ef22c557fce43af"} Jan 26 19:25:52 crc kubenswrapper[4787]: I0126 19:25:52.495052 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" Jan 26 19:25:52 crc kubenswrapper[4787]: I0126 19:25:52.496985 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" Jan 26 19:25:52 crc kubenswrapper[4787]: I0126 19:25:52.508716 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tlzxd" podStartSLOduration=2.889567841 podStartE2EDuration="17.508694476s" podCreationTimestamp="2026-01-26 19:25:35 +0000 UTC" firstStartedPulling="2026-01-26 19:25:36.536991495 +0000 UTC m=+6105.244127618" lastFinishedPulling="2026-01-26 19:25:51.15611812 +0000 UTC m=+6119.863254253" observedRunningTime="2026-01-26 19:25:52.505522469 +0000 UTC m=+6121.212658662" watchObservedRunningTime="2026-01-26 19:25:52.508694476 +0000 UTC m=+6121.215830599" Jan 26 19:25:52 crc kubenswrapper[4787]: I0126 19:25:52.538422 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz" podStartSLOduration=3.247619813 podStartE2EDuration="17.538402597s" podCreationTimestamp="2026-01-26 19:25:35 +0000 UTC" firstStartedPulling="2026-01-26 19:25:36.823638022 +0000 UTC m=+6105.530774155" lastFinishedPulling="2026-01-26 19:25:51.114420816 +0000 UTC m=+6119.821556939" observedRunningTime="2026-01-26 19:25:52.535071527 +0000 UTC m=+6121.242207660" watchObservedRunningTime="2026-01-26 19:25:52.538402597 +0000 UTC m=+6121.245538730" Jan 26 19:25:52 crc kubenswrapper[4787]: I0126 19:25:52.562461 4787 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operators/observability-operator-59bdc8b94-pb6sd" podStartSLOduration=3.4436505090000002 podStartE2EDuration="17.562440003s" podCreationTimestamp="2026-01-26 19:25:35 +0000 UTC" firstStartedPulling="2026-01-26 19:25:37.037311995 +0000 UTC m=+6105.744448128" lastFinishedPulling="2026-01-26 19:25:51.156101479 +0000 UTC m=+6119.863237622" observedRunningTime="2026-01-26 19:25:52.559022349 +0000 UTC m=+6121.266158492" watchObservedRunningTime="2026-01-26 19:25:52.562440003 +0000 UTC m=+6121.269576136" Jan 26 19:25:52 crc kubenswrapper[4787]: I0126 19:25:52.590514 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf" podStartSLOduration=3.35401127 podStartE2EDuration="17.590488984s" podCreationTimestamp="2026-01-26 19:25:35 +0000 UTC" firstStartedPulling="2026-01-26 19:25:36.877192474 +0000 UTC m=+6105.584328607" lastFinishedPulling="2026-01-26 19:25:51.113670188 +0000 UTC m=+6119.820806321" observedRunningTime="2026-01-26 19:25:52.58210407 +0000 UTC m=+6121.289240203" watchObservedRunningTime="2026-01-26 19:25:52.590488984 +0000 UTC m=+6121.297625107" Jan 26 19:26:00 crc kubenswrapper[4787]: I0126 19:26:00.446735 4787 scope.go:117] "RemoveContainer" containerID="5db8df89b67cef9daf295fc187e0ffd77a966428e5f83dc79d4be4973b1981af" Jan 26 19:26:00 crc kubenswrapper[4787]: I0126 19:26:00.475374 4787 scope.go:117] "RemoveContainer" containerID="332d79b790b613ba3bb665983e627f2e882653a0dd639ae87ba41993082a6416" Jan 26 19:26:00 crc kubenswrapper[4787]: I0126 19:26:00.520070 4787 scope.go:117] "RemoveContainer" containerID="fd5934e5ff2a9a6f894fecd3c576d1a0e0be1b4d865361dde1ce28c869c6f8d2" Jan 26 19:26:00 crc kubenswrapper[4787]: I0126 19:26:00.589994 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:26:00 crc kubenswrapper[4787]: E0126 19:26:00.590872 4787 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:26:06 crc kubenswrapper[4787]: I0126 19:26:06.629273 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" event={"ID":"7f8c4487-cd5a-456d-8a69-0b1296b4a687","Type":"ContainerStarted","Data":"c08b1c42f1e668da6934bece987d1e5bf5914e1e980ef4930cbc211d588d3d29"} Jan 26 19:26:06 crc kubenswrapper[4787]: I0126 19:26:06.629932 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" Jan 26 19:26:06 crc kubenswrapper[4787]: I0126 19:26:06.654117 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" podStartSLOduration=2.062456633 podStartE2EDuration="30.654095537s" podCreationTimestamp="2026-01-26 19:25:36 +0000 UTC" firstStartedPulling="2026-01-26 19:25:37.013375024 +0000 UTC m=+6105.720511157" lastFinishedPulling="2026-01-26 19:26:05.605013928 +0000 UTC m=+6134.312150061" observedRunningTime="2026-01-26 19:26:06.650066509 +0000 UTC m=+6135.357202662" watchObservedRunningTime="2026-01-26 19:26:06.654095537 +0000 UTC m=+6135.361231670" Jan 26 19:26:08 crc kubenswrapper[4787]: I0126 19:26:08.058376 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-p64bx"] Jan 26 19:26:08 crc kubenswrapper[4787]: I0126 19:26:08.072619 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-7hrtx"] Jan 26 19:26:08 crc kubenswrapper[4787]: I0126 19:26:08.080674 4787 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-api-db-create-pphlr"] Jan 26 19:26:08 crc kubenswrapper[4787]: I0126 19:26:08.088273 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7hrtx"] Jan 26 19:26:08 crc kubenswrapper[4787]: I0126 19:26:08.096886 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-p64bx"] Jan 26 19:26:08 crc kubenswrapper[4787]: I0126 19:26:08.105069 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pphlr"] Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.043883 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f84a-account-create-update-l89g9"] Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.055398 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f61a-account-create-update-74jw7"] Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.067147 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f61a-account-create-update-74jw7"] Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.075538 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f84a-account-create-update-l89g9"] Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.083345 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6f01-account-create-update-r82zn"] Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.090516 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6f01-account-create-update-r82zn"] Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.601575 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f90eeb4-4e9a-4568-8280-357f44085201" path="/var/lib/kubelet/pods/0f90eeb4-4e9a-4568-8280-357f44085201/volumes" Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.602249 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1bf4c986-f22b-4694-a10c-36fb04249541" path="/var/lib/kubelet/pods/1bf4c986-f22b-4694-a10c-36fb04249541/volumes" Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.602890 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ed44f7-bff4-4567-8a8c-0a82410634e2" path="/var/lib/kubelet/pods/81ed44f7-bff4-4567-8a8c-0a82410634e2/volumes" Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.603672 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed94e4d-5ab3-40f0-9cd2-d95ea215370b" path="/var/lib/kubelet/pods/9ed94e4d-5ab3-40f0-9cd2-d95ea215370b/volumes" Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.604935 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad611fd-34d0-4806-8eab-01472e06fd17" path="/var/lib/kubelet/pods/aad611fd-34d0-4806-8eab-01472e06fd17/volumes" Jan 26 19:26:09 crc kubenswrapper[4787]: I0126 19:26:09.605548 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f04cb77e-3029-4aff-a4cb-105699eb3fbe" path="/var/lib/kubelet/pods/f04cb77e-3029-4aff-a4cb-105699eb3fbe/volumes" Jan 26 19:26:15 crc kubenswrapper[4787]: I0126 19:26:15.589732 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:26:15 crc kubenswrapper[4787]: E0126 19:26:15.590262 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:26:16 crc kubenswrapper[4787]: I0126 19:26:16.480216 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-v7v9m" Jan 26 
19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.291805 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.292653 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="6362959b-77e9-45ad-b697-6a4978a116d4" containerName="openstackclient" containerID="cri-o://29ed3d6f056a2fe140b016e801c297532d7fb34644b992d85649fb07018896f6" gracePeriod=2 Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.302871 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.318119 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 19:26:18 crc kubenswrapper[4787]: E0126 19:26:18.318508 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6362959b-77e9-45ad-b697-6a4978a116d4" containerName="openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.318526 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6362959b-77e9-45ad-b697-6a4978a116d4" containerName="openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.318730 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6362959b-77e9-45ad-b697-6a4978a116d4" containerName="openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.319373 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.322389 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6362959b-77e9-45ad-b697-6a4978a116d4" podUID="0baa6332-c732-44e5-9c97-219ab6fbcd53" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.332682 4787 status_manager.go:875] "Failed to update status for pod" pod="openstack/openstackclient" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0baa6332-c732-44e5-9c97-219ab6fbcd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T19:26:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T19:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T19:26:18Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T19:26:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openstackclient\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/clouds.yaml\\\",\\\"name\\\":\\\"openstack-config\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/secure.yaml\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/cloudrc\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d69wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T19:26:18Z\\\"}}\" for pod \"openstack\"/\"openstackclient\": pods \"openstackclient\" not found" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.335977 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.340897 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d69wh\" (UniqueName: \"kubernetes.io/projected/0baa6332-c732-44e5-9c97-219ab6fbcd53-kube-api-access-d69wh\") pod \"openstackclient\" (UID: \"0baa6332-c732-44e5-9c97-219ab6fbcd53\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.341007 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config-secret\") pod \"openstackclient\" (UID: \"0baa6332-c732-44e5-9c97-219ab6fbcd53\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.341229 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config\") pod \"openstackclient\" (UID: \"0baa6332-c732-44e5-9c97-219ab6fbcd53\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.345084 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 26 19:26:18 crc kubenswrapper[4787]: E0126 19:26:18.354759 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-d69wh openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="0baa6332-c732-44e5-9c97-219ab6fbcd53" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.366325 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.367553 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.372798 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0baa6332-c732-44e5-9c97-219ab6fbcd53" podUID="9e0d751f-3670-4d36-8836-2b4812a78127" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.383125 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.443748 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9e0d751f-3670-4d36-8836-2b4812a78127-openstack-config\") pod \"openstackclient\" (UID: \"9e0d751f-3670-4d36-8836-2b4812a78127\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.443888 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config\") pod \"openstackclient\" (UID: \"0baa6332-c732-44e5-9c97-219ab6fbcd53\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.443912 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5nmr\" (UniqueName: \"kubernetes.io/projected/9e0d751f-3670-4d36-8836-2b4812a78127-kube-api-access-t5nmr\") pod \"openstackclient\" (UID: \"9e0d751f-3670-4d36-8836-2b4812a78127\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.443960 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d69wh\" (UniqueName: \"kubernetes.io/projected/0baa6332-c732-44e5-9c97-219ab6fbcd53-kube-api-access-d69wh\") pod \"openstackclient\" (UID: \"0baa6332-c732-44e5-9c97-219ab6fbcd53\") " 
pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.443993 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config-secret\") pod \"openstackclient\" (UID: \"0baa6332-c732-44e5-9c97-219ab6fbcd53\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.444040 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9e0d751f-3670-4d36-8836-2b4812a78127-openstack-config-secret\") pod \"openstackclient\" (UID: \"9e0d751f-3670-4d36-8836-2b4812a78127\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.444972 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config\") pod \"openstackclient\" (UID: \"0baa6332-c732-44e5-9c97-219ab6fbcd53\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: E0126 19:26:18.447405 4787 projected.go:194] Error preparing data for projected volume kube-api-access-d69wh for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (0baa6332-c732-44e5-9c97-219ab6fbcd53) does not match the UID in record. The object might have been deleted and then recreated Jan 26 19:26:18 crc kubenswrapper[4787]: E0126 19:26:18.447470 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0baa6332-c732-44e5-9c97-219ab6fbcd53-kube-api-access-d69wh podName:0baa6332-c732-44e5-9c97-219ab6fbcd53 nodeName:}" failed. No retries permitted until 2026-01-26 19:26:18.94745377 +0000 UTC m=+6147.654589903 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d69wh" (UniqueName: "kubernetes.io/projected/0baa6332-c732-44e5-9c97-219ab6fbcd53-kube-api-access-d69wh") pod "openstackclient" (UID: "0baa6332-c732-44e5-9c97-219ab6fbcd53") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (0baa6332-c732-44e5-9c97-219ab6fbcd53) does not match the UID in record. The object might have been deleted and then recreated Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.471170 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config-secret\") pod \"openstackclient\" (UID: \"0baa6332-c732-44e5-9c97-219ab6fbcd53\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.546031 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9e0d751f-3670-4d36-8836-2b4812a78127-openstack-config-secret\") pod \"openstackclient\" (UID: \"9e0d751f-3670-4d36-8836-2b4812a78127\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.546107 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9e0d751f-3670-4d36-8836-2b4812a78127-openstack-config\") pod \"openstackclient\" (UID: \"9e0d751f-3670-4d36-8836-2b4812a78127\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.546211 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5nmr\" (UniqueName: \"kubernetes.io/projected/9e0d751f-3670-4d36-8836-2b4812a78127-kube-api-access-t5nmr\") pod \"openstackclient\" (UID: \"9e0d751f-3670-4d36-8836-2b4812a78127\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc 
kubenswrapper[4787]: I0126 19:26:18.548198 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9e0d751f-3670-4d36-8836-2b4812a78127-openstack-config\") pod \"openstackclient\" (UID: \"9e0d751f-3670-4d36-8836-2b4812a78127\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.552816 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.553362 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9e0d751f-3670-4d36-8836-2b4812a78127-openstack-config-secret\") pod \"openstackclient\" (UID: \"9e0d751f-3670-4d36-8836-2b4812a78127\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.554147 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.556129 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-v85t9" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.565089 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.581707 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5nmr\" (UniqueName: \"kubernetes.io/projected/9e0d751f-3670-4d36-8836-2b4812a78127-kube-api-access-t5nmr\") pod \"openstackclient\" (UID: \"9e0d751f-3670-4d36-8836-2b4812a78127\") " pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.685373 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.762742 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pth5k\" (UniqueName: \"kubernetes.io/projected/67828f47-7882-45f4-bec7-4c0de16894a4-kube-api-access-pth5k\") pod \"kube-state-metrics-0\" (UID: \"67828f47-7882-45f4-bec7-4c0de16894a4\") " pod="openstack/kube-state-metrics-0" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.784918 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.791339 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0baa6332-c732-44e5-9c97-219ab6fbcd53" podUID="9e0d751f-3670-4d36-8836-2b4812a78127" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.823899 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.832473 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0baa6332-c732-44e5-9c97-219ab6fbcd53" podUID="9e0d751f-3670-4d36-8836-2b4812a78127" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.869082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pth5k\" (UniqueName: \"kubernetes.io/projected/67828f47-7882-45f4-bec7-4c0de16894a4-kube-api-access-pth5k\") pod \"kube-state-metrics-0\" (UID: \"67828f47-7882-45f4-bec7-4c0de16894a4\") " pod="openstack/kube-state-metrics-0" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.869277 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d69wh\" (UniqueName: \"kubernetes.io/projected/0baa6332-c732-44e5-9c97-219ab6fbcd53-kube-api-access-d69wh\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.899401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pth5k\" (UniqueName: \"kubernetes.io/projected/67828f47-7882-45f4-bec7-4c0de16894a4-kube-api-access-pth5k\") pod \"kube-state-metrics-0\" (UID: \"67828f47-7882-45f4-bec7-4c0de16894a4\") " pod="openstack/kube-state-metrics-0" Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.985379 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config\") pod \"0baa6332-c732-44e5-9c97-219ab6fbcd53\" (UID: \"0baa6332-c732-44e5-9c97-219ab6fbcd53\") " Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.985867 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config-secret\") pod \"0baa6332-c732-44e5-9c97-219ab6fbcd53\" (UID: \"0baa6332-c732-44e5-9c97-219ab6fbcd53\") " Jan 26 19:26:18 crc kubenswrapper[4787]: I0126 19:26:18.988307 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0baa6332-c732-44e5-9c97-219ab6fbcd53" (UID: "0baa6332-c732-44e5-9c97-219ab6fbcd53"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.006048 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.006890 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.023163 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0baa6332-c732-44e5-9c97-219ab6fbcd53" (UID: "0baa6332-c732-44e5-9c97-219ab6fbcd53"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.112202 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0baa6332-c732-44e5-9c97-219ab6fbcd53-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.418893 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.423152 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.427033 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.427227 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.427347 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.427383 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.427590 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-mz2tn" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.445699 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.520873 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9tvc\" (UniqueName: 
\"kubernetes.io/projected/9edb1909-c634-429b-b9cd-dc59167c9850-kube-api-access-s9tvc\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.520983 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9edb1909-c634-429b-b9cd-dc59167c9850-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.521073 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9edb1909-c634-429b-b9cd-dc59167c9850-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.521126 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9edb1909-c634-429b-b9cd-dc59167c9850-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.521148 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9edb1909-c634-429b-b9cd-dc59167c9850-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.521173 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9edb1909-c634-429b-b9cd-dc59167c9850-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.521204 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9edb1909-c634-429b-b9cd-dc59167c9850-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.613188 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0baa6332-c732-44e5-9c97-219ab6fbcd53" path="/var/lib/kubelet/pods/0baa6332-c732-44e5-9c97-219ab6fbcd53/volumes" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.623543 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9tvc\" (UniqueName: \"kubernetes.io/projected/9edb1909-c634-429b-b9cd-dc59167c9850-kube-api-access-s9tvc\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.623602 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9edb1909-c634-429b-b9cd-dc59167c9850-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.623707 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/9edb1909-c634-429b-b9cd-dc59167c9850-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.623763 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9edb1909-c634-429b-b9cd-dc59167c9850-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.623784 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9edb1909-c634-429b-b9cd-dc59167c9850-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.623809 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9edb1909-c634-429b-b9cd-dc59167c9850-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.623848 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9edb1909-c634-429b-b9cd-dc59167c9850-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.628894 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/9edb1909-c634-429b-b9cd-dc59167c9850-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.631693 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9edb1909-c634-429b-b9cd-dc59167c9850-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.632083 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9edb1909-c634-429b-b9cd-dc59167c9850-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.634578 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9edb1909-c634-429b-b9cd-dc59167c9850-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.635827 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9edb1909-c634-429b-b9cd-dc59167c9850-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.636756 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.645646 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9edb1909-c634-429b-b9cd-dc59167c9850-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.700052 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9tvc\" (UniqueName: \"kubernetes.io/projected/9edb1909-c634-429b-b9cd-dc59167c9850-kube-api-access-s9tvc\") pod \"alertmanager-metric-storage-0\" (UID: \"9edb1909-c634-429b-b9cd-dc59167c9850\") " pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.779810 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.785879 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:19 crc kubenswrapper[4787]: W0126 19:26:19.796612 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67828f47_7882_45f4_bec7_4c0de16894a4.slice/crio-c2361800a671b3ce507f251e7e8f76a47faa621abcfcf9bf70731e389b056e1f WatchSource:0}: Error finding container c2361800a671b3ce507f251e7e8f76a47faa621abcfcf9bf70731e389b056e1f: Status 404 returned error can't find the container with id c2361800a671b3ce507f251e7e8f76a47faa621abcfcf9bf70731e389b056e1f Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.801112 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.801872 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9e0d751f-3670-4d36-8836-2b4812a78127","Type":"ContainerStarted","Data":"e8720a98500c4064657e437ae0c278c703f7b06954ea17c61d31e258f8e9c402"} Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.808092 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0baa6332-c732-44e5-9c97-219ab6fbcd53" podUID="9e0d751f-3670-4d36-8836-2b4812a78127" Jan 26 19:26:19 crc kubenswrapper[4787]: I0126 19:26:19.864250 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0baa6332-c732-44e5-9c97-219ab6fbcd53" podUID="9e0d751f-3670-4d36-8836-2b4812a78127" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.021912 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.025251 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.031591 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.031640 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.031861 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.031911 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-qx68l" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.032008 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.032130 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.032153 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.032225 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.032583 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.136116 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/c7c3b3fd-be4f-4013-9590-9b5640e0b224-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.136175 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c7c3b3fd-be4f-4013-9590-9b5640e0b224-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.136269 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c7c3b3fd-be4f-4013-9590-9b5640e0b224-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.136320 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c7c3b3fd-be4f-4013-9590-9b5640e0b224-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.136344 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c7c3b3fd-be4f-4013-9590-9b5640e0b224-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.136375 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0689c45-67fc-4c19-961d-d0805845e76a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0689c45-67fc-4c19-961d-d0805845e76a\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.136394 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk66x\" (UniqueName: \"kubernetes.io/projected/c7c3b3fd-be4f-4013-9590-9b5640e0b224-kube-api-access-fk66x\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.136521 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c7c3b3fd-be4f-4013-9590-9b5640e0b224-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.136598 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7c3b3fd-be4f-4013-9590-9b5640e0b224-config\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.136668 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c7c3b3fd-be4f-4013-9590-9b5640e0b224-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " 
pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.238152 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7c3b3fd-be4f-4013-9590-9b5640e0b224-config\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.238235 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c7c3b3fd-be4f-4013-9590-9b5640e0b224-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.238287 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c7c3b3fd-be4f-4013-9590-9b5640e0b224-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.238319 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c7c3b3fd-be4f-4013-9590-9b5640e0b224-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.238378 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c7c3b3fd-be4f-4013-9590-9b5640e0b224-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.238434 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c7c3b3fd-be4f-4013-9590-9b5640e0b224-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.238462 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c7c3b3fd-be4f-4013-9590-9b5640e0b224-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.238503 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a0689c45-67fc-4c19-961d-d0805845e76a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0689c45-67fc-4c19-961d-d0805845e76a\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.238527 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk66x\" (UniqueName: \"kubernetes.io/projected/c7c3b3fd-be4f-4013-9590-9b5640e0b224-kube-api-access-fk66x\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.238555 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c7c3b3fd-be4f-4013-9590-9b5640e0b224-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.239412 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c7c3b3fd-be4f-4013-9590-9b5640e0b224-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.239730 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c7c3b3fd-be4f-4013-9590-9b5640e0b224-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.240415 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c7c3b3fd-be4f-4013-9590-9b5640e0b224-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.242527 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7c3b3fd-be4f-4013-9590-9b5640e0b224-config\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.242586 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/c7c3b3fd-be4f-4013-9590-9b5640e0b224-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.243876 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c7c3b3fd-be4f-4013-9590-9b5640e0b224-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.244602 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.244633 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a0689c45-67fc-4c19-961d-d0805845e76a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0689c45-67fc-4c19-961d-d0805845e76a\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/78f79f7d49b5181c38424f139e58b869dfd580d35cd4dc2d1f16d1097dc36e26/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.244723 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c7c3b3fd-be4f-4013-9590-9b5640e0b224-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.245083 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/c7c3b3fd-be4f-4013-9590-9b5640e0b224-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.265203 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk66x\" (UniqueName: \"kubernetes.io/projected/c7c3b3fd-be4f-4013-9590-9b5640e0b224-kube-api-access-fk66x\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.328648 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a0689c45-67fc-4c19-961d-d0805845e76a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0689c45-67fc-4c19-961d-d0805845e76a\") pod \"prometheus-metric-storage-0\" (UID: \"c7c3b3fd-be4f-4013-9590-9b5640e0b224\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.343817 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.345071 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:20 crc kubenswrapper[4787]: W0126 19:26:20.372931 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9edb1909_c634_429b_b9cd_dc59167c9850.slice/crio-2208a46ba7c4bffea8dc8f8ddeed5b08b14be0e9a5ed35f63ce4a0a00063b896 WatchSource:0}: Error finding container 2208a46ba7c4bffea8dc8f8ddeed5b08b14be0e9a5ed35f63ce4a0a00063b896: Status 404 returned error can't find the container with id 2208a46ba7c4bffea8dc8f8ddeed5b08b14be0e9a5ed35f63ce4a0a00063b896 Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.860401 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9e0d751f-3670-4d36-8836-2b4812a78127","Type":"ContainerStarted","Data":"ddff1e47cf2476844483081b0065a7f4bfaa91bf55b9b229d3ee316391f2b47e"} Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.863626 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67828f47-7882-45f4-bec7-4c0de16894a4","Type":"ContainerStarted","Data":"54dbf903f4094d45e7242629eeaff815a6e1fe88e60740ed85bcf498132eafb5"} Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.863662 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67828f47-7882-45f4-bec7-4c0de16894a4","Type":"ContainerStarted","Data":"c2361800a671b3ce507f251e7e8f76a47faa621abcfcf9bf70731e389b056e1f"} Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.864013 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.868614 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"9edb1909-c634-429b-b9cd-dc59167c9850","Type":"ContainerStarted","Data":"2208a46ba7c4bffea8dc8f8ddeed5b08b14be0e9a5ed35f63ce4a0a00063b896"} Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.871191 4787 generic.go:334] "Generic (PLEG): container finished" podID="6362959b-77e9-45ad-b697-6a4978a116d4" containerID="29ed3d6f056a2fe140b016e801c297532d7fb34644b992d85649fb07018896f6" exitCode=137 Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.871239 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19336f6defbc38df5a67dd24d50595bad6694ec20c2e1c3ad0ae7db5204a7368" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.883293 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.8832704959999997 podStartE2EDuration="2.883270496s" podCreationTimestamp="2026-01-26 19:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:26:20.87811346 +0000 UTC m=+6149.585249593" watchObservedRunningTime="2026-01-26 19:26:20.883270496 +0000 UTC m=+6149.590406629" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.909570 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.42429783 podStartE2EDuration="2.909539945s" podCreationTimestamp="2026-01-26 19:26:18 +0000 UTC" firstStartedPulling="2026-01-26 19:26:19.819461248 +0000 UTC m=+6148.526597381" lastFinishedPulling="2026-01-26 19:26:20.304703363 +0000 UTC m=+6149.011839496" observedRunningTime="2026-01-26 19:26:20.902368159 +0000 UTC m=+6149.609504292" watchObservedRunningTime="2026-01-26 19:26:20.909539945 +0000 UTC m=+6149.616676078" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.932129 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.936861 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:26:20 crc kubenswrapper[4787]: W0126 19:26:20.952921 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7c3b3fd_be4f_4013_9590_9b5640e0b224.slice/crio-acc5c243374a03352795ea46d596c51ff562c38ae1e0c62bebc8cdd8cc6aea80 WatchSource:0}: Error finding container acc5c243374a03352795ea46d596c51ff562c38ae1e0c62bebc8cdd8cc6aea80: Status 404 returned error can't find the container with id acc5c243374a03352795ea46d596c51ff562c38ae1e0c62bebc8cdd8cc6aea80 Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.955364 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kk6n\" (UniqueName: \"kubernetes.io/projected/6362959b-77e9-45ad-b697-6a4978a116d4-kube-api-access-4kk6n\") pod \"6362959b-77e9-45ad-b697-6a4978a116d4\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.955460 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config-secret\") pod \"6362959b-77e9-45ad-b697-6a4978a116d4\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.955572 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config\") pod \"6362959b-77e9-45ad-b697-6a4978a116d4\" (UID: \"6362959b-77e9-45ad-b697-6a4978a116d4\") " Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.963085 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6362959b-77e9-45ad-b697-6a4978a116d4-kube-api-access-4kk6n" (OuterVolumeSpecName: "kube-api-access-4kk6n") pod "6362959b-77e9-45ad-b697-6a4978a116d4" (UID: "6362959b-77e9-45ad-b697-6a4978a116d4"). InnerVolumeSpecName "kube-api-access-4kk6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:26:20 crc kubenswrapper[4787]: I0126 19:26:20.991679 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6362959b-77e9-45ad-b697-6a4978a116d4" (UID: "6362959b-77e9-45ad-b697-6a4978a116d4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:26:21 crc kubenswrapper[4787]: I0126 19:26:21.036124 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6362959b-77e9-45ad-b697-6a4978a116d4" (UID: "6362959b-77e9-45ad-b697-6a4978a116d4"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:21 crc kubenswrapper[4787]: I0126 19:26:21.060728 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:21 crc kubenswrapper[4787]: I0126 19:26:21.060778 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6362959b-77e9-45ad-b697-6a4978a116d4-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:21 crc kubenswrapper[4787]: I0126 19:26:21.060794 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kk6n\" (UniqueName: \"kubernetes.io/projected/6362959b-77e9-45ad-b697-6a4978a116d4-kube-api-access-4kk6n\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:21 crc kubenswrapper[4787]: I0126 19:26:21.601933 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6362959b-77e9-45ad-b697-6a4978a116d4" path="/var/lib/kubelet/pods/6362959b-77e9-45ad-b697-6a4978a116d4/volumes" Jan 26 19:26:21 crc kubenswrapper[4787]: I0126 19:26:21.879602 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 19:26:21 crc kubenswrapper[4787]: I0126 19:26:21.879755 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c7c3b3fd-be4f-4013-9590-9b5640e0b224","Type":"ContainerStarted","Data":"acc5c243374a03352795ea46d596c51ff562c38ae1e0c62bebc8cdd8cc6aea80"} Jan 26 19:26:22 crc kubenswrapper[4787]: I0126 19:26:22.626112 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rfhzc"] Jan 26 19:26:22 crc kubenswrapper[4787]: I0126 19:26:22.650984 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rfhzc"] Jan 26 19:26:23 crc kubenswrapper[4787]: I0126 19:26:23.627562 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c23b6c5-9693-44f7-afcb-826d1c2370df" path="/var/lib/kubelet/pods/4c23b6c5-9693-44f7-afcb-826d1c2370df/volumes" Jan 26 19:26:27 crc kubenswrapper[4787]: I0126 19:26:27.590581 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:26:27 crc kubenswrapper[4787]: E0126 19:26:27.591521 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:26:28 crc kubenswrapper[4787]: I0126 19:26:28.965322 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"9edb1909-c634-429b-b9cd-dc59167c9850","Type":"ContainerStarted","Data":"7a1b0dd037d2263832a69ab0a5439d094c5f05bfa29f1865e08128f29d5f5344"} Jan 26 19:26:28 crc kubenswrapper[4787]: I0126 
19:26:28.967907 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c7c3b3fd-be4f-4013-9590-9b5640e0b224","Type":"ContainerStarted","Data":"25e27c4a159e4f4a84f1726a88c6721d090e1e9f91a3d438cff2d66db243e9aa"} Jan 26 19:26:29 crc kubenswrapper[4787]: I0126 19:26:29.012642 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 19:26:36 crc kubenswrapper[4787]: I0126 19:26:36.040333 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"9edb1909-c634-429b-b9cd-dc59167c9850","Type":"ContainerDied","Data":"7a1b0dd037d2263832a69ab0a5439d094c5f05bfa29f1865e08128f29d5f5344"} Jan 26 19:26:36 crc kubenswrapper[4787]: I0126 19:26:36.040329 4787 generic.go:334] "Generic (PLEG): container finished" podID="9edb1909-c634-429b-b9cd-dc59167c9850" containerID="7a1b0dd037d2263832a69ab0a5439d094c5f05bfa29f1865e08128f29d5f5344" exitCode=0 Jan 26 19:26:37 crc kubenswrapper[4787]: I0126 19:26:37.052647 4787 generic.go:334] "Generic (PLEG): container finished" podID="c7c3b3fd-be4f-4013-9590-9b5640e0b224" containerID="25e27c4a159e4f4a84f1726a88c6721d090e1e9f91a3d438cff2d66db243e9aa" exitCode=0 Jan 26 19:26:37 crc kubenswrapper[4787]: I0126 19:26:37.052749 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c7c3b3fd-be4f-4013-9590-9b5640e0b224","Type":"ContainerDied","Data":"25e27c4a159e4f4a84f1726a88c6721d090e1e9f91a3d438cff2d66db243e9aa"} Jan 26 19:26:39 crc kubenswrapper[4787]: I0126 19:26:39.075071 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"9edb1909-c634-429b-b9cd-dc59167c9850","Type":"ContainerStarted","Data":"78cc047efe4a273ec00f452b2f3f32a19e64144beb122541e0acac330709df90"} Jan 26 19:26:39 crc kubenswrapper[4787]: I0126 19:26:39.589535 4787 scope.go:117] "RemoveContainer" 
containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:26:39 crc kubenswrapper[4787]: E0126 19:26:39.590173 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:26:41 crc kubenswrapper[4787]: I0126 19:26:41.054083 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d25hh"] Jan 26 19:26:41 crc kubenswrapper[4787]: I0126 19:26:41.067000 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-dkn5r"] Jan 26 19:26:41 crc kubenswrapper[4787]: I0126 19:26:41.075668 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-dkn5r"] Jan 26 19:26:41 crc kubenswrapper[4787]: I0126 19:26:41.086165 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d25hh"] Jan 26 19:26:41 crc kubenswrapper[4787]: I0126 19:26:41.600569 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593f2f21-8ec9-4416-bc74-578add751f8f" path="/var/lib/kubelet/pods/593f2f21-8ec9-4416-bc74-578add751f8f/volumes" Jan 26 19:26:41 crc kubenswrapper[4787]: I0126 19:26:41.601550 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10d393a-1d4b-428f-b2e8-aecb524c0f36" path="/var/lib/kubelet/pods/b10d393a-1d4b-428f-b2e8-aecb524c0f36/volumes" Jan 26 19:26:43 crc kubenswrapper[4787]: I0126 19:26:43.169317 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"9edb1909-c634-429b-b9cd-dc59167c9850","Type":"ContainerStarted","Data":"197ba152ebcd636ad084e619d676e98383994801b463ece521985c4ef5c5cc45"} Jan 26 19:26:43 crc kubenswrapper[4787]: I0126 19:26:43.170007 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:43 crc kubenswrapper[4787]: I0126 19:26:43.173126 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 26 19:26:43 crc kubenswrapper[4787]: I0126 19:26:43.194811 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.867490608 podStartE2EDuration="24.194793555s" podCreationTimestamp="2026-01-26 19:26:19 +0000 UTC" firstStartedPulling="2026-01-26 19:26:20.377327948 +0000 UTC m=+6149.084464081" lastFinishedPulling="2026-01-26 19:26:38.704630895 +0000 UTC m=+6167.411767028" observedRunningTime="2026-01-26 19:26:43.189049765 +0000 UTC m=+6171.896185898" watchObservedRunningTime="2026-01-26 19:26:43.194793555 +0000 UTC m=+6171.901929688" Jan 26 19:26:45 crc kubenswrapper[4787]: I0126 19:26:45.197713 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c7c3b3fd-be4f-4013-9590-9b5640e0b224","Type":"ContainerStarted","Data":"c63320db76486757e33134d11912f611554915bd2c8b5d8f227336533ad221cf"} Jan 26 19:26:49 crc kubenswrapper[4787]: I0126 19:26:49.235186 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c7c3b3fd-be4f-4013-9590-9b5640e0b224","Type":"ContainerStarted","Data":"3dc872710fcb115bfc88b5f57be4157a96a1a77748448b65d9cb56360d8ccb1a"} Jan 26 19:26:52 crc kubenswrapper[4787]: I0126 19:26:52.278334 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"c7c3b3fd-be4f-4013-9590-9b5640e0b224","Type":"ContainerStarted","Data":"00c2c7f390f40dc3e4652312fc3eac0ee3c6ccca030da1d2f8e63f10116fbfc5"} Jan 26 19:26:52 crc kubenswrapper[4787]: I0126 19:26:52.323827 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.994001144 podStartE2EDuration="34.323808137s" podCreationTimestamp="2026-01-26 19:26:18 +0000 UTC" firstStartedPulling="2026-01-26 19:26:20.962195944 +0000 UTC m=+6149.669332077" lastFinishedPulling="2026-01-26 19:26:51.292002937 +0000 UTC m=+6179.999139070" observedRunningTime="2026-01-26 19:26:52.320529987 +0000 UTC m=+6181.027666120" watchObservedRunningTime="2026-01-26 19:26:52.323808137 +0000 UTC m=+6181.030944270" Jan 26 19:26:54 crc kubenswrapper[4787]: I0126 19:26:54.589869 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:26:54 crc kubenswrapper[4787]: E0126 19:26:54.590896 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:26:55 crc kubenswrapper[4787]: I0126 19:26:55.346143 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.097707 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.101639 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.104509 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.105122 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.110444 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.268983 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-scripts\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.269028 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-run-httpd\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.269074 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-config-data\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.269099 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " 
pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.269315 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.269400 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-log-httpd\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.269526 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mbk5\" (UniqueName: \"kubernetes.io/projected/53b238b3-803d-4284-bfa8-c4e7657dfefb-kube-api-access-4mbk5\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.371336 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-scripts\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.371378 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-run-httpd\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.371407 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-config-data\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.371437 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.371483 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.371508 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-log-httpd\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.371555 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mbk5\" (UniqueName: \"kubernetes.io/projected/53b238b3-803d-4284-bfa8-c4e7657dfefb-kube-api-access-4mbk5\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.373318 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-run-httpd\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 
crc kubenswrapper[4787]: I0126 19:26:58.373363 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-log-httpd\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.378843 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-config-data\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.382747 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.382801 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.383606 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-scripts\") pod \"ceilometer-0\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.391075 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mbk5\" (UniqueName: \"kubernetes.io/projected/53b238b3-803d-4284-bfa8-c4e7657dfefb-kube-api-access-4mbk5\") pod \"ceilometer-0\" (UID: 
\"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.421055 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:26:58 crc kubenswrapper[4787]: I0126 19:26:58.928927 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:26:58 crc kubenswrapper[4787]: W0126 19:26:58.937357 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53b238b3_803d_4284_bfa8_c4e7657dfefb.slice/crio-601176355db7bd9f603b3928a3be07e11e6b6a4ea53988a51b550258b0197dc6 WatchSource:0}: Error finding container 601176355db7bd9f603b3928a3be07e11e6b6a4ea53988a51b550258b0197dc6: Status 404 returned error can't find the container with id 601176355db7bd9f603b3928a3be07e11e6b6a4ea53988a51b550258b0197dc6 Jan 26 19:26:59 crc kubenswrapper[4787]: I0126 19:26:59.353501 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53b238b3-803d-4284-bfa8-c4e7657dfefb","Type":"ContainerStarted","Data":"601176355db7bd9f603b3928a3be07e11e6b6a4ea53988a51b550258b0197dc6"} Jan 26 19:27:00 crc kubenswrapper[4787]: I0126 19:27:00.378352 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53b238b3-803d-4284-bfa8-c4e7657dfefb","Type":"ContainerStarted","Data":"98ecdaa65df4414a3b25228999b43661412c45c4805375ab31b23d9a43c55706"} Jan 26 19:27:00 crc kubenswrapper[4787]: I0126 19:27:00.674824 4787 scope.go:117] "RemoveContainer" containerID="af3f82f5d26fec1d59457d2f442b5aa58197e4b962011cf7f0a18f879befa4c6" Jan 26 19:27:00 crc kubenswrapper[4787]: I0126 19:27:00.718697 4787 scope.go:117] "RemoveContainer" containerID="673db101059efdbcc8c49046a454d4c0c8313ec08e58b651f8b307c03bcd2208" Jan 26 19:27:00 crc kubenswrapper[4787]: I0126 19:27:00.763042 4787 scope.go:117] "RemoveContainer" 
containerID="b422a913941acedeb5540c3e2b1df7f3b891591f7ede706236a07eb5b16421a3" Jan 26 19:27:00 crc kubenswrapper[4787]: I0126 19:27:00.820722 4787 scope.go:117] "RemoveContainer" containerID="975160d504f09c444df8ae765688296320ddfe408a1eb16a514242c452a9d56a" Jan 26 19:27:00 crc kubenswrapper[4787]: I0126 19:27:00.877881 4787 scope.go:117] "RemoveContainer" containerID="2c01a2f88b0829782d2bb5bbc0b71d60ac0e6eff51bd64d6820b96f04f6110c6" Jan 26 19:27:00 crc kubenswrapper[4787]: I0126 19:27:00.908323 4787 scope.go:117] "RemoveContainer" containerID="ed2d64c16444e50bc17143c948623d6da5ffc32229caec76cc077fe5e3be39ea" Jan 26 19:27:00 crc kubenswrapper[4787]: I0126 19:27:00.994493 4787 scope.go:117] "RemoveContainer" containerID="29ed3d6f056a2fe140b016e801c297532d7fb34644b992d85649fb07018896f6" Jan 26 19:27:01 crc kubenswrapper[4787]: I0126 19:27:01.022005 4787 scope.go:117] "RemoveContainer" containerID="41b301526222a9249c832896347d44b5428a597b4bdaadfb8a9cc0bc8f803910" Jan 26 19:27:01 crc kubenswrapper[4787]: I0126 19:27:01.036875 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-ldfbg"] Jan 26 19:27:01 crc kubenswrapper[4787]: I0126 19:27:01.050502 4787 scope.go:117] "RemoveContainer" containerID="2e07edadc2af0d72bfa28d5e9aa03808f0c099f6e9efd32d89e899cb8709ad8d" Jan 26 19:27:01 crc kubenswrapper[4787]: I0126 19:27:01.052119 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ldfbg"] Jan 26 19:27:01 crc kubenswrapper[4787]: I0126 19:27:01.184484 4787 scope.go:117] "RemoveContainer" containerID="e609af0091708393e9957d88a259a3b34c186c03a411a18b6d5e6095cd7740c9" Jan 26 19:27:01 crc kubenswrapper[4787]: I0126 19:27:01.606786 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303" path="/var/lib/kubelet/pods/c1a3bd2a-545a-41a2-b59a-0cb4d9cf5303/volumes" Jan 26 19:27:02 crc kubenswrapper[4787]: I0126 19:27:02.404072 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53b238b3-803d-4284-bfa8-c4e7657dfefb","Type":"ContainerStarted","Data":"5ad34f823079cc009fa38293deb891684ab5a58aae819d6877e75cab51a0bb89"} Jan 26 19:27:03 crc kubenswrapper[4787]: I0126 19:27:03.414161 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53b238b3-803d-4284-bfa8-c4e7657dfefb","Type":"ContainerStarted","Data":"e36ead0e47484664d688bcc1425b351bf809ed0f49058dbd9361ca0e6efb1040"} Jan 26 19:27:05 crc kubenswrapper[4787]: I0126 19:27:05.345773 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 19:27:05 crc kubenswrapper[4787]: I0126 19:27:05.349878 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 19:27:05 crc kubenswrapper[4787]: I0126 19:27:05.436191 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53b238b3-803d-4284-bfa8-c4e7657dfefb","Type":"ContainerStarted","Data":"00d07112b9eb3d017d543e54ef50a093729a76d55831afdfa3ff4dbc68cbec97"} Jan 26 19:27:05 crc kubenswrapper[4787]: I0126 19:27:05.436290 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 19:27:05 crc kubenswrapper[4787]: I0126 19:27:05.437092 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 19:27:05 crc kubenswrapper[4787]: I0126 19:27:05.465197 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.668075289 podStartE2EDuration="7.465175764s" podCreationTimestamp="2026-01-26 19:26:58 +0000 UTC" firstStartedPulling="2026-01-26 19:26:58.939350796 +0000 UTC m=+6187.646486929" lastFinishedPulling="2026-01-26 19:27:04.736451271 +0000 UTC m=+6193.443587404" observedRunningTime="2026-01-26 
19:27:05.456110704 +0000 UTC m=+6194.163246847" watchObservedRunningTime="2026-01-26 19:27:05.465175764 +0000 UTC m=+6194.172311897" Jan 26 19:27:06 crc kubenswrapper[4787]: I0126 19:27:06.589336 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:27:06 crc kubenswrapper[4787]: E0126 19:27:06.589902 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.654307 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-1544-account-create-update-mbjcv"] Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.656993 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1544-account-create-update-mbjcv" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.665758 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.675989 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-xsl7r"] Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.680164 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xsl7r" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.714792 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1544-account-create-update-mbjcv"] Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.725296 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-xsl7r"] Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.739940 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-operator-scripts\") pod \"aodh-db-create-xsl7r\" (UID: \"3adf2c19-c9c4-4d1e-a577-b511f4b22e93\") " pod="openstack/aodh-db-create-xsl7r" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.740117 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f18e5d1-19e3-4c9e-883a-56e294d936a1-operator-scripts\") pod \"aodh-1544-account-create-update-mbjcv\" (UID: \"1f18e5d1-19e3-4c9e-883a-56e294d936a1\") " pod="openstack/aodh-1544-account-create-update-mbjcv" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.740170 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kwdx\" (UniqueName: \"kubernetes.io/projected/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-kube-api-access-9kwdx\") pod \"aodh-db-create-xsl7r\" (UID: \"3adf2c19-c9c4-4d1e-a577-b511f4b22e93\") " pod="openstack/aodh-db-create-xsl7r" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.740424 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxtk\" (UniqueName: \"kubernetes.io/projected/1f18e5d1-19e3-4c9e-883a-56e294d936a1-kube-api-access-wvxtk\") pod \"aodh-1544-account-create-update-mbjcv\" (UID: 
\"1f18e5d1-19e3-4c9e-883a-56e294d936a1\") " pod="openstack/aodh-1544-account-create-update-mbjcv" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.845345 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-operator-scripts\") pod \"aodh-db-create-xsl7r\" (UID: \"3adf2c19-c9c4-4d1e-a577-b511f4b22e93\") " pod="openstack/aodh-db-create-xsl7r" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.845515 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f18e5d1-19e3-4c9e-883a-56e294d936a1-operator-scripts\") pod \"aodh-1544-account-create-update-mbjcv\" (UID: \"1f18e5d1-19e3-4c9e-883a-56e294d936a1\") " pod="openstack/aodh-1544-account-create-update-mbjcv" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.845565 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kwdx\" (UniqueName: \"kubernetes.io/projected/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-kube-api-access-9kwdx\") pod \"aodh-db-create-xsl7r\" (UID: \"3adf2c19-c9c4-4d1e-a577-b511f4b22e93\") " pod="openstack/aodh-db-create-xsl7r" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.845660 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvxtk\" (UniqueName: \"kubernetes.io/projected/1f18e5d1-19e3-4c9e-883a-56e294d936a1-kube-api-access-wvxtk\") pod \"aodh-1544-account-create-update-mbjcv\" (UID: \"1f18e5d1-19e3-4c9e-883a-56e294d936a1\") " pod="openstack/aodh-1544-account-create-update-mbjcv" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.846399 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f18e5d1-19e3-4c9e-883a-56e294d936a1-operator-scripts\") pod \"aodh-1544-account-create-update-mbjcv\" (UID: 
\"1f18e5d1-19e3-4c9e-883a-56e294d936a1\") " pod="openstack/aodh-1544-account-create-update-mbjcv" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.847541 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-operator-scripts\") pod \"aodh-db-create-xsl7r\" (UID: \"3adf2c19-c9c4-4d1e-a577-b511f4b22e93\") " pod="openstack/aodh-db-create-xsl7r" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.877315 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kwdx\" (UniqueName: \"kubernetes.io/projected/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-kube-api-access-9kwdx\") pod \"aodh-db-create-xsl7r\" (UID: \"3adf2c19-c9c4-4d1e-a577-b511f4b22e93\") " pod="openstack/aodh-db-create-xsl7r" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.877319 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvxtk\" (UniqueName: \"kubernetes.io/projected/1f18e5d1-19e3-4c9e-883a-56e294d936a1-kube-api-access-wvxtk\") pod \"aodh-1544-account-create-update-mbjcv\" (UID: \"1f18e5d1-19e3-4c9e-883a-56e294d936a1\") " pod="openstack/aodh-1544-account-create-update-mbjcv" Jan 26 19:27:10 crc kubenswrapper[4787]: I0126 19:27:10.978059 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1544-account-create-update-mbjcv" Jan 26 19:27:11 crc kubenswrapper[4787]: I0126 19:27:11.002180 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xsl7r" Jan 26 19:27:11 crc kubenswrapper[4787]: I0126 19:27:11.517150 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-xsl7r"] Jan 26 19:27:11 crc kubenswrapper[4787]: W0126 19:27:11.531224 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3adf2c19_c9c4_4d1e_a577_b511f4b22e93.slice/crio-774f1e81d6e6b5966502877a48b6d2e457c139645e87a3e2e2df566789fbfffc WatchSource:0}: Error finding container 774f1e81d6e6b5966502877a48b6d2e457c139645e87a3e2e2df566789fbfffc: Status 404 returned error can't find the container with id 774f1e81d6e6b5966502877a48b6d2e457c139645e87a3e2e2df566789fbfffc Jan 26 19:27:11 crc kubenswrapper[4787]: I0126 19:27:11.542771 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1544-account-create-update-mbjcv"] Jan 26 19:27:12 crc kubenswrapper[4787]: I0126 19:27:12.506222 4787 generic.go:334] "Generic (PLEG): container finished" podID="3adf2c19-c9c4-4d1e-a577-b511f4b22e93" containerID="feea5faf4f93742029b88fa6362f9e5fed94a9d677137f62011a5a099bdd4e24" exitCode=0 Jan 26 19:27:12 crc kubenswrapper[4787]: I0126 19:27:12.506288 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xsl7r" event={"ID":"3adf2c19-c9c4-4d1e-a577-b511f4b22e93","Type":"ContainerDied","Data":"feea5faf4f93742029b88fa6362f9e5fed94a9d677137f62011a5a099bdd4e24"} Jan 26 19:27:12 crc kubenswrapper[4787]: I0126 19:27:12.506613 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xsl7r" event={"ID":"3adf2c19-c9c4-4d1e-a577-b511f4b22e93","Type":"ContainerStarted","Data":"774f1e81d6e6b5966502877a48b6d2e457c139645e87a3e2e2df566789fbfffc"} Jan 26 19:27:12 crc kubenswrapper[4787]: I0126 19:27:12.508328 4787 generic.go:334] "Generic (PLEG): container finished" podID="1f18e5d1-19e3-4c9e-883a-56e294d936a1" 
containerID="d55a50548e656aa9b3e74b427a3adf65e2b4c7b03a76247ea5fdb032eb275d7b" exitCode=0 Jan 26 19:27:12 crc kubenswrapper[4787]: I0126 19:27:12.508366 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1544-account-create-update-mbjcv" event={"ID":"1f18e5d1-19e3-4c9e-883a-56e294d936a1","Type":"ContainerDied","Data":"d55a50548e656aa9b3e74b427a3adf65e2b4c7b03a76247ea5fdb032eb275d7b"} Jan 26 19:27:12 crc kubenswrapper[4787]: I0126 19:27:12.508392 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1544-account-create-update-mbjcv" event={"ID":"1f18e5d1-19e3-4c9e-883a-56e294d936a1","Type":"ContainerStarted","Data":"46f66d5b87d6dbd82bd8d2b0c3d66ad001913c6ed134a8afbba9001a9df12a08"} Jan 26 19:27:14 crc kubenswrapper[4787]: I0126 19:27:13.974259 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1544-account-create-update-mbjcv" Jan 26 19:27:14 crc kubenswrapper[4787]: I0126 19:27:14.020162 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f18e5d1-19e3-4c9e-883a-56e294d936a1-operator-scripts\") pod \"1f18e5d1-19e3-4c9e-883a-56e294d936a1\" (UID: \"1f18e5d1-19e3-4c9e-883a-56e294d936a1\") " Jan 26 19:27:14 crc kubenswrapper[4787]: I0126 19:27:14.020330 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvxtk\" (UniqueName: \"kubernetes.io/projected/1f18e5d1-19e3-4c9e-883a-56e294d936a1-kube-api-access-wvxtk\") pod \"1f18e5d1-19e3-4c9e-883a-56e294d936a1\" (UID: \"1f18e5d1-19e3-4c9e-883a-56e294d936a1\") " Jan 26 19:27:14 crc kubenswrapper[4787]: I0126 19:27:14.020686 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f18e5d1-19e3-4c9e-883a-56e294d936a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f18e5d1-19e3-4c9e-883a-56e294d936a1" (UID: 
"1f18e5d1-19e3-4c9e-883a-56e294d936a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:27:14 crc kubenswrapper[4787]: I0126 19:27:14.020922 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f18e5d1-19e3-4c9e-883a-56e294d936a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:14 crc kubenswrapper[4787]: I0126 19:27:14.026171 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f18e5d1-19e3-4c9e-883a-56e294d936a1-kube-api-access-wvxtk" (OuterVolumeSpecName: "kube-api-access-wvxtk") pod "1f18e5d1-19e3-4c9e-883a-56e294d936a1" (UID: "1f18e5d1-19e3-4c9e-883a-56e294d936a1"). InnerVolumeSpecName "kube-api-access-wvxtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:27:14 crc kubenswrapper[4787]: I0126 19:27:14.122815 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvxtk\" (UniqueName: \"kubernetes.io/projected/1f18e5d1-19e3-4c9e-883a-56e294d936a1-kube-api-access-wvxtk\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:14 crc kubenswrapper[4787]: I0126 19:27:14.530044 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1544-account-create-update-mbjcv" event={"ID":"1f18e5d1-19e3-4c9e-883a-56e294d936a1","Type":"ContainerDied","Data":"46f66d5b87d6dbd82bd8d2b0c3d66ad001913c6ed134a8afbba9001a9df12a08"} Jan 26 19:27:14 crc kubenswrapper[4787]: I0126 19:27:14.530094 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46f66d5b87d6dbd82bd8d2b0c3d66ad001913c6ed134a8afbba9001a9df12a08" Jan 26 19:27:14 crc kubenswrapper[4787]: I0126 19:27:14.530128 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1544-account-create-update-mbjcv" Jan 26 19:27:15 crc kubenswrapper[4787]: I0126 19:27:15.019899 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-xsl7r" Jan 26 19:27:15 crc kubenswrapper[4787]: I0126 19:27:15.052668 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kwdx\" (UniqueName: \"kubernetes.io/projected/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-kube-api-access-9kwdx\") pod \"3adf2c19-c9c4-4d1e-a577-b511f4b22e93\" (UID: \"3adf2c19-c9c4-4d1e-a577-b511f4b22e93\") " Jan 26 19:27:15 crc kubenswrapper[4787]: I0126 19:27:15.053051 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-operator-scripts\") pod \"3adf2c19-c9c4-4d1e-a577-b511f4b22e93\" (UID: \"3adf2c19-c9c4-4d1e-a577-b511f4b22e93\") " Jan 26 19:27:15 crc kubenswrapper[4787]: I0126 19:27:15.053866 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3adf2c19-c9c4-4d1e-a577-b511f4b22e93" (UID: "3adf2c19-c9c4-4d1e-a577-b511f4b22e93"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:27:15 crc kubenswrapper[4787]: I0126 19:27:15.054443 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:15 crc kubenswrapper[4787]: I0126 19:27:15.074900 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-kube-api-access-9kwdx" (OuterVolumeSpecName: "kube-api-access-9kwdx") pod "3adf2c19-c9c4-4d1e-a577-b511f4b22e93" (UID: "3adf2c19-c9c4-4d1e-a577-b511f4b22e93"). InnerVolumeSpecName "kube-api-access-9kwdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:27:15 crc kubenswrapper[4787]: I0126 19:27:15.155818 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kwdx\" (UniqueName: \"kubernetes.io/projected/3adf2c19-c9c4-4d1e-a577-b511f4b22e93-kube-api-access-9kwdx\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:15 crc kubenswrapper[4787]: I0126 19:27:15.541639 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xsl7r" event={"ID":"3adf2c19-c9c4-4d1e-a577-b511f4b22e93","Type":"ContainerDied","Data":"774f1e81d6e6b5966502877a48b6d2e457c139645e87a3e2e2df566789fbfffc"} Jan 26 19:27:15 crc kubenswrapper[4787]: I0126 19:27:15.541900 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="774f1e81d6e6b5966502877a48b6d2e457c139645e87a3e2e2df566789fbfffc" Jan 26 19:27:15 crc kubenswrapper[4787]: I0126 19:27:15.541688 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xsl7r" Jan 26 19:27:19 crc kubenswrapper[4787]: I0126 19:27:19.590223 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:27:19 crc kubenswrapper[4787]: E0126 19:27:19.593435 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.088098 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-ftc88"] Jan 26 19:27:21 crc kubenswrapper[4787]: E0126 19:27:21.088991 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3adf2c19-c9c4-4d1e-a577-b511f4b22e93" containerName="mariadb-database-create" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.089010 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3adf2c19-c9c4-4d1e-a577-b511f4b22e93" containerName="mariadb-database-create" Jan 26 19:27:21 crc kubenswrapper[4787]: E0126 19:27:21.089030 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f18e5d1-19e3-4c9e-883a-56e294d936a1" containerName="mariadb-account-create-update" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.089038 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f18e5d1-19e3-4c9e-883a-56e294d936a1" containerName="mariadb-account-create-update" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.089287 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f18e5d1-19e3-4c9e-883a-56e294d936a1" containerName="mariadb-account-create-update" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 
19:27:21.089321 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3adf2c19-c9c4-4d1e-a577-b511f4b22e93" containerName="mariadb-database-create" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.090315 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.092655 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-h5sjc" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.092675 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.092971 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.094588 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.105324 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-ftc88"] Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.183399 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-config-data\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.183788 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwfw\" (UniqueName: \"kubernetes.io/projected/8a2ed31f-039f-490e-8a26-89cd41005449-kube-api-access-9rwfw\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.184001 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-combined-ca-bundle\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.184047 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-scripts\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.286321 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-scripts\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.286401 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-config-data\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.286513 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rwfw\" (UniqueName: \"kubernetes.io/projected/8a2ed31f-039f-490e-8a26-89cd41005449-kube-api-access-9rwfw\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.286584 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-combined-ca-bundle\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.293177 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-scripts\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.293359 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-combined-ca-bundle\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.294243 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-config-data\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.305551 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rwfw\" (UniqueName: \"kubernetes.io/projected/8a2ed31f-039f-490e-8a26-89cd41005449-kube-api-access-9rwfw\") pod \"aodh-db-sync-ftc88\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.412192 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:21 crc kubenswrapper[4787]: I0126 19:27:21.916282 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-ftc88"] Jan 26 19:27:22 crc kubenswrapper[4787]: I0126 19:27:22.688282 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-ftc88" event={"ID":"8a2ed31f-039f-490e-8a26-89cd41005449","Type":"ContainerStarted","Data":"0e71605e1f7fefaf0049d7f81ea0eccdd1be8a649c2310b196c708574a482235"} Jan 26 19:27:27 crc kubenswrapper[4787]: I0126 19:27:27.747429 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-ftc88" event={"ID":"8a2ed31f-039f-490e-8a26-89cd41005449","Type":"ContainerStarted","Data":"dc29d76d2aa4971b96fb3adbb941397cc4d671a04e13dde280359ba5393b39da"} Jan 26 19:27:27 crc kubenswrapper[4787]: I0126 19:27:27.779267 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-ftc88" podStartSLOduration=2.20659011 podStartE2EDuration="6.779246184s" podCreationTimestamp="2026-01-26 19:27:21 +0000 UTC" firstStartedPulling="2026-01-26 19:27:21.920574152 +0000 UTC m=+6210.627710285" lastFinishedPulling="2026-01-26 19:27:26.493230216 +0000 UTC m=+6215.200366359" observedRunningTime="2026-01-26 19:27:27.765412805 +0000 UTC m=+6216.472548948" watchObservedRunningTime="2026-01-26 19:27:27.779246184 +0000 UTC m=+6216.486382327" Jan 26 19:27:28 crc kubenswrapper[4787]: I0126 19:27:28.427862 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 19:27:29 crc kubenswrapper[4787]: I0126 19:27:29.767158 4787 generic.go:334] "Generic (PLEG): container finished" podID="8a2ed31f-039f-490e-8a26-89cd41005449" containerID="dc29d76d2aa4971b96fb3adbb941397cc4d671a04e13dde280359ba5393b39da" exitCode=0 Jan 26 19:27:29 crc kubenswrapper[4787]: I0126 19:27:29.767352 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-db-sync-ftc88" event={"ID":"8a2ed31f-039f-490e-8a26-89cd41005449","Type":"ContainerDied","Data":"dc29d76d2aa4971b96fb3adbb941397cc4d671a04e13dde280359ba5393b39da"} Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.177053 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.230833 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-config-data\") pod \"8a2ed31f-039f-490e-8a26-89cd41005449\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.230975 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-combined-ca-bundle\") pod \"8a2ed31f-039f-490e-8a26-89cd41005449\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.231211 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-scripts\") pod \"8a2ed31f-039f-490e-8a26-89cd41005449\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.231254 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rwfw\" (UniqueName: \"kubernetes.io/projected/8a2ed31f-039f-490e-8a26-89cd41005449-kube-api-access-9rwfw\") pod \"8a2ed31f-039f-490e-8a26-89cd41005449\" (UID: \"8a2ed31f-039f-490e-8a26-89cd41005449\") " Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.238434 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-scripts" 
(OuterVolumeSpecName: "scripts") pod "8a2ed31f-039f-490e-8a26-89cd41005449" (UID: "8a2ed31f-039f-490e-8a26-89cd41005449"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.238493 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2ed31f-039f-490e-8a26-89cd41005449-kube-api-access-9rwfw" (OuterVolumeSpecName: "kube-api-access-9rwfw") pod "8a2ed31f-039f-490e-8a26-89cd41005449" (UID: "8a2ed31f-039f-490e-8a26-89cd41005449"). InnerVolumeSpecName "kube-api-access-9rwfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.260676 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-config-data" (OuterVolumeSpecName: "config-data") pod "8a2ed31f-039f-490e-8a26-89cd41005449" (UID: "8a2ed31f-039f-490e-8a26-89cd41005449"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.274813 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a2ed31f-039f-490e-8a26-89cd41005449" (UID: "8a2ed31f-039f-490e-8a26-89cd41005449"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.334064 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.334096 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rwfw\" (UniqueName: \"kubernetes.io/projected/8a2ed31f-039f-490e-8a26-89cd41005449-kube-api-access-9rwfw\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.334116 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.334128 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2ed31f-039f-490e-8a26-89cd41005449-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.789121 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-ftc88" event={"ID":"8a2ed31f-039f-490e-8a26-89cd41005449","Type":"ContainerDied","Data":"0e71605e1f7fefaf0049d7f81ea0eccdd1be8a649c2310b196c708574a482235"} Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.789482 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e71605e1f7fefaf0049d7f81ea0eccdd1be8a649c2310b196c708574a482235" Jan 26 19:27:31 crc kubenswrapper[4787]: I0126 19:27:31.789251 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-ftc88" Jan 26 19:27:32 crc kubenswrapper[4787]: I0126 19:27:32.589505 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:27:32 crc kubenswrapper[4787]: E0126 19:27:32.589817 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.430811 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 26 19:27:35 crc kubenswrapper[4787]: E0126 19:27:35.432988 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2ed31f-039f-490e-8a26-89cd41005449" containerName="aodh-db-sync" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.433085 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2ed31f-039f-490e-8a26-89cd41005449" containerName="aodh-db-sync" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.433466 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2ed31f-039f-490e-8a26-89cd41005449" containerName="aodh-db-sync" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.435609 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.441905 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.442123 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.442173 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-h5sjc" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.442480 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.524577 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd637985-9b74-4b96-a04f-b197c9264c9b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.524631 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-892wx\" (UniqueName: \"kubernetes.io/projected/fd637985-9b74-4b96-a04f-b197c9264c9b-kube-api-access-892wx\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.524662 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd637985-9b74-4b96-a04f-b197c9264c9b-config-data\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.524804 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fd637985-9b74-4b96-a04f-b197c9264c9b-scripts\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.627485 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd637985-9b74-4b96-a04f-b197c9264c9b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.627541 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-892wx\" (UniqueName: \"kubernetes.io/projected/fd637985-9b74-4b96-a04f-b197c9264c9b-kube-api-access-892wx\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.627609 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd637985-9b74-4b96-a04f-b197c9264c9b-config-data\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.627909 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd637985-9b74-4b96-a04f-b197c9264c9b-scripts\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.634448 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd637985-9b74-4b96-a04f-b197c9264c9b-scripts\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.635514 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd637985-9b74-4b96-a04f-b197c9264c9b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.649353 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-892wx\" (UniqueName: \"kubernetes.io/projected/fd637985-9b74-4b96-a04f-b197c9264c9b-kube-api-access-892wx\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.650383 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd637985-9b74-4b96-a04f-b197c9264c9b-config-data\") pod \"aodh-0\" (UID: \"fd637985-9b74-4b96-a04f-b197c9264c9b\") " pod="openstack/aodh-0" Jan 26 19:27:35 crc kubenswrapper[4787]: I0126 19:27:35.766209 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 26 19:27:36 crc kubenswrapper[4787]: I0126 19:27:36.275821 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 26 19:27:36 crc kubenswrapper[4787]: I0126 19:27:36.852542 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fd637985-9b74-4b96-a04f-b197c9264c9b","Type":"ContainerStarted","Data":"e1c5e15bfec2031c5e2d468f18c53da51a866359e52535b71fd00822a495e884"} Jan 26 19:27:37 crc kubenswrapper[4787]: I0126 19:27:37.696281 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:27:37 crc kubenswrapper[4787]: I0126 19:27:37.696842 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="ceilometer-central-agent" containerID="cri-o://98ecdaa65df4414a3b25228999b43661412c45c4805375ab31b23d9a43c55706" gracePeriod=30 Jan 26 
19:27:37 crc kubenswrapper[4787]: I0126 19:27:37.697005 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="proxy-httpd" containerID="cri-o://00d07112b9eb3d017d543e54ef50a093729a76d55831afdfa3ff4dbc68cbec97" gracePeriod=30 Jan 26 19:27:37 crc kubenswrapper[4787]: I0126 19:27:37.697071 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="sg-core" containerID="cri-o://e36ead0e47484664d688bcc1425b351bf809ed0f49058dbd9361ca0e6efb1040" gracePeriod=30 Jan 26 19:27:37 crc kubenswrapper[4787]: I0126 19:27:37.697127 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="ceilometer-notification-agent" containerID="cri-o://5ad34f823079cc009fa38293deb891684ab5a58aae819d6877e75cab51a0bb89" gracePeriod=30 Jan 26 19:27:37 crc kubenswrapper[4787]: I0126 19:27:37.864736 4787 generic.go:334] "Generic (PLEG): container finished" podID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerID="00d07112b9eb3d017d543e54ef50a093729a76d55831afdfa3ff4dbc68cbec97" exitCode=0 Jan 26 19:27:37 crc kubenswrapper[4787]: I0126 19:27:37.864767 4787 generic.go:334] "Generic (PLEG): container finished" podID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerID="e36ead0e47484664d688bcc1425b351bf809ed0f49058dbd9361ca0e6efb1040" exitCode=2 Jan 26 19:27:37 crc kubenswrapper[4787]: I0126 19:27:37.864803 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53b238b3-803d-4284-bfa8-c4e7657dfefb","Type":"ContainerDied","Data":"00d07112b9eb3d017d543e54ef50a093729a76d55831afdfa3ff4dbc68cbec97"} Jan 26 19:27:37 crc kubenswrapper[4787]: I0126 19:27:37.864836 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"53b238b3-803d-4284-bfa8-c4e7657dfefb","Type":"ContainerDied","Data":"e36ead0e47484664d688bcc1425b351bf809ed0f49058dbd9361ca0e6efb1040"} Jan 26 19:27:37 crc kubenswrapper[4787]: I0126 19:27:37.866764 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fd637985-9b74-4b96-a04f-b197c9264c9b","Type":"ContainerStarted","Data":"82f41ca559b9066009f63e0a1f50bed38df31f0391d0d46256d89d904f36db95"} Jan 26 19:27:38 crc kubenswrapper[4787]: I0126 19:27:38.881391 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fd637985-9b74-4b96-a04f-b197c9264c9b","Type":"ContainerStarted","Data":"d22ad6dbaf3adb157eee663bf3e0d722581159630719c0851af55fbbd477a0d2"} Jan 26 19:27:38 crc kubenswrapper[4787]: I0126 19:27:38.884401 4787 generic.go:334] "Generic (PLEG): container finished" podID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerID="98ecdaa65df4414a3b25228999b43661412c45c4805375ab31b23d9a43c55706" exitCode=0 Jan 26 19:27:38 crc kubenswrapper[4787]: I0126 19:27:38.884436 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53b238b3-803d-4284-bfa8-c4e7657dfefb","Type":"ContainerDied","Data":"98ecdaa65df4414a3b25228999b43661412c45c4805375ab31b23d9a43c55706"} Jan 26 19:27:40 crc kubenswrapper[4787]: I0126 19:27:40.909329 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fd637985-9b74-4b96-a04f-b197c9264c9b","Type":"ContainerStarted","Data":"011212ce3690d83000410894d7d17db07f460d40b314fb22c3697d8ff9981026"} Jan 26 19:27:41 crc kubenswrapper[4787]: I0126 19:27:41.936234 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fd637985-9b74-4b96-a04f-b197c9264c9b","Type":"ContainerStarted","Data":"561fb3809540b28849fd67c1032502b454b28ba587ad34ba0dee5ca050005371"} Jan 26 19:27:44 crc kubenswrapper[4787]: I0126 19:27:44.589646 4787 scope.go:117] "RemoveContainer" 
containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:27:44 crc kubenswrapper[4787]: E0126 19:27:44.590437 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:27:45 crc kubenswrapper[4787]: I0126 19:27:45.041209 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=4.882415548 podStartE2EDuration="10.041186631s" podCreationTimestamp="2026-01-26 19:27:35 +0000 UTC" firstStartedPulling="2026-01-26 19:27:36.316031403 +0000 UTC m=+6225.023167536" lastFinishedPulling="2026-01-26 19:27:41.474802486 +0000 UTC m=+6230.181938619" observedRunningTime="2026-01-26 19:27:41.976470966 +0000 UTC m=+6230.683607109" watchObservedRunningTime="2026-01-26 19:27:45.041186631 +0000 UTC m=+6233.748322764" Jan 26 19:27:45 crc kubenswrapper[4787]: I0126 19:27:45.043387 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e031-account-create-update-qzx5b"] Jan 26 19:27:45 crc kubenswrapper[4787]: I0126 19:27:45.055380 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-2468z"] Jan 26 19:27:45 crc kubenswrapper[4787]: I0126 19:27:45.065107 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e031-account-create-update-qzx5b"] Jan 26 19:27:45 crc kubenswrapper[4787]: I0126 19:27:45.074151 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-2468z"] Jan 26 19:27:45 crc kubenswrapper[4787]: I0126 19:27:45.605654 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="03d0ce5d-8a55-40b3-bca2-ab3439ceec22" path="/var/lib/kubelet/pods/03d0ce5d-8a55-40b3-bca2-ab3439ceec22/volumes" Jan 26 19:27:45 crc kubenswrapper[4787]: I0126 19:27:45.607501 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eebd6fa5-1a6d-4749-ada3-fdd8c774abdc" path="/var/lib/kubelet/pods/eebd6fa5-1a6d-4749-ada3-fdd8c774abdc/volumes" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.006972 4787 generic.go:334] "Generic (PLEG): container finished" podID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerID="5ad34f823079cc009fa38293deb891684ab5a58aae819d6877e75cab51a0bb89" exitCode=0 Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.007317 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53b238b3-803d-4284-bfa8-c4e7657dfefb","Type":"ContainerDied","Data":"5ad34f823079cc009fa38293deb891684ab5a58aae819d6877e75cab51a0bb89"} Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.007343 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53b238b3-803d-4284-bfa8-c4e7657dfefb","Type":"ContainerDied","Data":"601176355db7bd9f603b3928a3be07e11e6b6a4ea53988a51b550258b0197dc6"} Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.007357 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="601176355db7bd9f603b3928a3be07e11e6b6a4ea53988a51b550258b0197dc6" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.067916 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.184567 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-sg-core-conf-yaml\") pod \"53b238b3-803d-4284-bfa8-c4e7657dfefb\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.184674 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-log-httpd\") pod \"53b238b3-803d-4284-bfa8-c4e7657dfefb\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.184763 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mbk5\" (UniqueName: \"kubernetes.io/projected/53b238b3-803d-4284-bfa8-c4e7657dfefb-kube-api-access-4mbk5\") pod \"53b238b3-803d-4284-bfa8-c4e7657dfefb\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.184798 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-config-data\") pod \"53b238b3-803d-4284-bfa8-c4e7657dfefb\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.184816 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-combined-ca-bundle\") pod \"53b238b3-803d-4284-bfa8-c4e7657dfefb\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.184876 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-scripts\") pod \"53b238b3-803d-4284-bfa8-c4e7657dfefb\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.184896 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-run-httpd\") pod \"53b238b3-803d-4284-bfa8-c4e7657dfefb\" (UID: \"53b238b3-803d-4284-bfa8-c4e7657dfefb\") " Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.185647 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "53b238b3-803d-4284-bfa8-c4e7657dfefb" (UID: "53b238b3-803d-4284-bfa8-c4e7657dfefb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.186131 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "53b238b3-803d-4284-bfa8-c4e7657dfefb" (UID: "53b238b3-803d-4284-bfa8-c4e7657dfefb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.190683 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53b238b3-803d-4284-bfa8-c4e7657dfefb-kube-api-access-4mbk5" (OuterVolumeSpecName: "kube-api-access-4mbk5") pod "53b238b3-803d-4284-bfa8-c4e7657dfefb" (UID: "53b238b3-803d-4284-bfa8-c4e7657dfefb"). InnerVolumeSpecName "kube-api-access-4mbk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.190831 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-scripts" (OuterVolumeSpecName: "scripts") pod "53b238b3-803d-4284-bfa8-c4e7657dfefb" (UID: "53b238b3-803d-4284-bfa8-c4e7657dfefb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.225079 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "53b238b3-803d-4284-bfa8-c4e7657dfefb" (UID: "53b238b3-803d-4284-bfa8-c4e7657dfefb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.286096 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53b238b3-803d-4284-bfa8-c4e7657dfefb" (UID: "53b238b3-803d-4284-bfa8-c4e7657dfefb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.287562 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.287579 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.287591 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mbk5\" (UniqueName: \"kubernetes.io/projected/53b238b3-803d-4284-bfa8-c4e7657dfefb-kube-api-access-4mbk5\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.287603 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.287613 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.287621 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53b238b3-803d-4284-bfa8-c4e7657dfefb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.304481 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-config-data" (OuterVolumeSpecName: "config-data") pod "53b238b3-803d-4284-bfa8-c4e7657dfefb" (UID: "53b238b3-803d-4284-bfa8-c4e7657dfefb"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:46 crc kubenswrapper[4787]: I0126 19:27:46.389880 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53b238b3-803d-4284-bfa8-c4e7657dfefb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.016278 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.050816 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.062300 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.073970 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:27:47 crc kubenswrapper[4787]: E0126 19:27:47.074454 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="ceilometer-notification-agent" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.074472 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="ceilometer-notification-agent" Jan 26 19:27:47 crc kubenswrapper[4787]: E0126 19:27:47.074503 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="ceilometer-central-agent" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.074511 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="ceilometer-central-agent" Jan 26 19:27:47 crc kubenswrapper[4787]: E0126 19:27:47.074532 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" 
containerName="proxy-httpd" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.074540 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="proxy-httpd" Jan 26 19:27:47 crc kubenswrapper[4787]: E0126 19:27:47.074558 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="sg-core" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.074565 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="sg-core" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.074790 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="ceilometer-central-agent" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.074811 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="proxy-httpd" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.074837 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="sg-core" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.074859 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" containerName="ceilometer-notification-agent" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.077488 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.089770 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.090123 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.117769 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.213273 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-scripts\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.213357 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.213478 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-run-httpd\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.213518 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " 
pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.213556 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-config-data\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.213684 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng7kz\" (UniqueName: \"kubernetes.io/projected/ce9ead37-b085-4ea7-aa36-65471119a627-kube-api-access-ng7kz\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.213730 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-log-httpd\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.318136 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.318417 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-run-httpd\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.318498 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.318595 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-config-data\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.318710 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng7kz\" (UniqueName: \"kubernetes.io/projected/ce9ead37-b085-4ea7-aa36-65471119a627-kube-api-access-ng7kz\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.318815 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-log-httpd\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.319026 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-scripts\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.320010 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-run-httpd\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc 
kubenswrapper[4787]: I0126 19:27:47.320373 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-log-httpd\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.324300 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.324789 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.331717 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-config-data\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.333915 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-scripts\") pod \"ceilometer-0\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.337110 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng7kz\" (UniqueName: \"kubernetes.io/projected/ce9ead37-b085-4ea7-aa36-65471119a627-kube-api-access-ng7kz\") pod \"ceilometer-0\" (UID: 
\"ce9ead37-b085-4ea7-aa36-65471119a627\") " pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.426646 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.603747 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53b238b3-803d-4284-bfa8-c4e7657dfefb" path="/var/lib/kubelet/pods/53b238b3-803d-4284-bfa8-c4e7657dfefb/volumes" Jan 26 19:27:47 crc kubenswrapper[4787]: I0126 19:27:47.976329 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:27:48 crc kubenswrapper[4787]: I0126 19:27:48.027319 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9ead37-b085-4ea7-aa36-65471119a627","Type":"ContainerStarted","Data":"18d049f7ae44d620d0607c0b87a075cdfaed5f66299c05fd7dcdbbd8d7b63235"} Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.042375 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9ead37-b085-4ea7-aa36-65471119a627","Type":"ContainerStarted","Data":"62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9"} Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.124291 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-gt2wn"] Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.126254 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-gt2wn" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.141531 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-gt2wn"] Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.230901 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-3fc2-account-create-update-5qwlg"] Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.232343 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3fc2-account-create-update-5qwlg" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.238915 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.245690 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3fc2-account-create-update-5qwlg"] Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.264048 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/665dec65-293e-4768-9b29-4fd9ace8bd56-operator-scripts\") pod \"manila-db-create-gt2wn\" (UID: \"665dec65-293e-4768-9b29-4fd9ace8bd56\") " pod="openstack/manila-db-create-gt2wn" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.264198 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qg4h\" (UniqueName: \"kubernetes.io/projected/665dec65-293e-4768-9b29-4fd9ace8bd56-kube-api-access-8qg4h\") pod \"manila-db-create-gt2wn\" (UID: \"665dec65-293e-4768-9b29-4fd9ace8bd56\") " pod="openstack/manila-db-create-gt2wn" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.367351 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e300fc33-07d6-49b7-8549-7ac0fd513183-operator-scripts\") pod \"manila-3fc2-account-create-update-5qwlg\" (UID: \"e300fc33-07d6-49b7-8549-7ac0fd513183\") " pod="openstack/manila-3fc2-account-create-update-5qwlg" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.367558 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/665dec65-293e-4768-9b29-4fd9ace8bd56-operator-scripts\") pod \"manila-db-create-gt2wn\" (UID: \"665dec65-293e-4768-9b29-4fd9ace8bd56\") " pod="openstack/manila-db-create-gt2wn" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.367610 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qg4h\" (UniqueName: \"kubernetes.io/projected/665dec65-293e-4768-9b29-4fd9ace8bd56-kube-api-access-8qg4h\") pod \"manila-db-create-gt2wn\" (UID: \"665dec65-293e-4768-9b29-4fd9ace8bd56\") " pod="openstack/manila-db-create-gt2wn" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.367665 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp6sj\" (UniqueName: \"kubernetes.io/projected/e300fc33-07d6-49b7-8549-7ac0fd513183-kube-api-access-qp6sj\") pod \"manila-3fc2-account-create-update-5qwlg\" (UID: \"e300fc33-07d6-49b7-8549-7ac0fd513183\") " pod="openstack/manila-3fc2-account-create-update-5qwlg" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.368653 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/665dec65-293e-4768-9b29-4fd9ace8bd56-operator-scripts\") pod \"manila-db-create-gt2wn\" (UID: \"665dec65-293e-4768-9b29-4fd9ace8bd56\") " pod="openstack/manila-db-create-gt2wn" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.386140 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qg4h\" 
(UniqueName: \"kubernetes.io/projected/665dec65-293e-4768-9b29-4fd9ace8bd56-kube-api-access-8qg4h\") pod \"manila-db-create-gt2wn\" (UID: \"665dec65-293e-4768-9b29-4fd9ace8bd56\") " pod="openstack/manila-db-create-gt2wn" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.469129 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e300fc33-07d6-49b7-8549-7ac0fd513183-operator-scripts\") pod \"manila-3fc2-account-create-update-5qwlg\" (UID: \"e300fc33-07d6-49b7-8549-7ac0fd513183\") " pod="openstack/manila-3fc2-account-create-update-5qwlg" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.469560 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp6sj\" (UniqueName: \"kubernetes.io/projected/e300fc33-07d6-49b7-8549-7ac0fd513183-kube-api-access-qp6sj\") pod \"manila-3fc2-account-create-update-5qwlg\" (UID: \"e300fc33-07d6-49b7-8549-7ac0fd513183\") " pod="openstack/manila-3fc2-account-create-update-5qwlg" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.470719 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e300fc33-07d6-49b7-8549-7ac0fd513183-operator-scripts\") pod \"manila-3fc2-account-create-update-5qwlg\" (UID: \"e300fc33-07d6-49b7-8549-7ac0fd513183\") " pod="openstack/manila-3fc2-account-create-update-5qwlg" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.490021 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp6sj\" (UniqueName: \"kubernetes.io/projected/e300fc33-07d6-49b7-8549-7ac0fd513183-kube-api-access-qp6sj\") pod \"manila-3fc2-account-create-update-5qwlg\" (UID: \"e300fc33-07d6-49b7-8549-7ac0fd513183\") " pod="openstack/manila-3fc2-account-create-update-5qwlg" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.517665 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-gt2wn" Jan 26 19:27:49 crc kubenswrapper[4787]: I0126 19:27:49.566875 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3fc2-account-create-update-5qwlg" Jan 26 19:27:50 crc kubenswrapper[4787]: I0126 19:27:50.097293 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-gt2wn"] Jan 26 19:27:50 crc kubenswrapper[4787]: I0126 19:27:50.101901 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9ead37-b085-4ea7-aa36-65471119a627","Type":"ContainerStarted","Data":"c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844"} Jan 26 19:27:50 crc kubenswrapper[4787]: I0126 19:27:50.102005 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9ead37-b085-4ea7-aa36-65471119a627","Type":"ContainerStarted","Data":"8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0"} Jan 26 19:27:50 crc kubenswrapper[4787]: W0126 19:27:50.102278 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod665dec65_293e_4768_9b29_4fd9ace8bd56.slice/crio-c2e97156e04babd8234279645ffaec682263b2c136570fce7db5f042e43be7d4 WatchSource:0}: Error finding container c2e97156e04babd8234279645ffaec682263b2c136570fce7db5f042e43be7d4: Status 404 returned error can't find the container with id c2e97156e04babd8234279645ffaec682263b2c136570fce7db5f042e43be7d4 Jan 26 19:27:50 crc kubenswrapper[4787]: I0126 19:27:50.216017 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3fc2-account-create-update-5qwlg"] Jan 26 19:27:51 crc kubenswrapper[4787]: I0126 19:27:51.111323 4787 generic.go:334] "Generic (PLEG): container finished" podID="665dec65-293e-4768-9b29-4fd9ace8bd56" containerID="7907e97ba640a87b556e93bf750a7283943fe08d8df2b82639a90b21cdeb902c" exitCode=0 Jan 26 
19:27:51 crc kubenswrapper[4787]: I0126 19:27:51.111370 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-gt2wn" event={"ID":"665dec65-293e-4768-9b29-4fd9ace8bd56","Type":"ContainerDied","Data":"7907e97ba640a87b556e93bf750a7283943fe08d8df2b82639a90b21cdeb902c"} Jan 26 19:27:51 crc kubenswrapper[4787]: I0126 19:27:51.111652 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-gt2wn" event={"ID":"665dec65-293e-4768-9b29-4fd9ace8bd56","Type":"ContainerStarted","Data":"c2e97156e04babd8234279645ffaec682263b2c136570fce7db5f042e43be7d4"} Jan 26 19:27:51 crc kubenswrapper[4787]: I0126 19:27:51.114023 4787 generic.go:334] "Generic (PLEG): container finished" podID="e300fc33-07d6-49b7-8549-7ac0fd513183" containerID="9a8ddf99d25ee0a60821b6bcb54621cf39f8fc8fe119c0f51d784e5e4ecfed0c" exitCode=0 Jan 26 19:27:51 crc kubenswrapper[4787]: I0126 19:27:51.114065 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3fc2-account-create-update-5qwlg" event={"ID":"e300fc33-07d6-49b7-8549-7ac0fd513183","Type":"ContainerDied","Data":"9a8ddf99d25ee0a60821b6bcb54621cf39f8fc8fe119c0f51d784e5e4ecfed0c"} Jan 26 19:27:51 crc kubenswrapper[4787]: I0126 19:27:51.114104 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3fc2-account-create-update-5qwlg" event={"ID":"e300fc33-07d6-49b7-8549-7ac0fd513183","Type":"ContainerStarted","Data":"0fc5934b45bf50741dda9c9b8eb834be784dbd1bec7da0075a3c02703399f248"} Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.126429 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9ead37-b085-4ea7-aa36-65471119a627","Type":"ContainerStarted","Data":"8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1"} Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.154800 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=1.83499518 podStartE2EDuration="5.154778756s" podCreationTimestamp="2026-01-26 19:27:47 +0000 UTC" firstStartedPulling="2026-01-26 19:27:47.977980089 +0000 UTC m=+6236.685116222" lastFinishedPulling="2026-01-26 19:27:51.297763665 +0000 UTC m=+6240.004899798" observedRunningTime="2026-01-26 19:27:52.144040202 +0000 UTC m=+6240.851176345" watchObservedRunningTime="2026-01-26 19:27:52.154778756 +0000 UTC m=+6240.861914889" Jan 26 19:27:52 crc kubenswrapper[4787]: E0126 19:27:52.376380 4787 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.621684 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-gt2wn" Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.630999 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3fc2-account-create-update-5qwlg" Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.765135 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp6sj\" (UniqueName: \"kubernetes.io/projected/e300fc33-07d6-49b7-8549-7ac0fd513183-kube-api-access-qp6sj\") pod \"e300fc33-07d6-49b7-8549-7ac0fd513183\" (UID: \"e300fc33-07d6-49b7-8549-7ac0fd513183\") " Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.765214 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/665dec65-293e-4768-9b29-4fd9ace8bd56-operator-scripts\") pod \"665dec65-293e-4768-9b29-4fd9ace8bd56\" (UID: \"665dec65-293e-4768-9b29-4fd9ace8bd56\") " Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.765248 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e300fc33-07d6-49b7-8549-7ac0fd513183-operator-scripts\") pod \"e300fc33-07d6-49b7-8549-7ac0fd513183\" (UID: \"e300fc33-07d6-49b7-8549-7ac0fd513183\") " Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.765308 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qg4h\" (UniqueName: \"kubernetes.io/projected/665dec65-293e-4768-9b29-4fd9ace8bd56-kube-api-access-8qg4h\") pod \"665dec65-293e-4768-9b29-4fd9ace8bd56\" (UID: \"665dec65-293e-4768-9b29-4fd9ace8bd56\") " Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.765819 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e300fc33-07d6-49b7-8549-7ac0fd513183-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e300fc33-07d6-49b7-8549-7ac0fd513183" (UID: "e300fc33-07d6-49b7-8549-7ac0fd513183"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.765819 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/665dec65-293e-4768-9b29-4fd9ace8bd56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "665dec65-293e-4768-9b29-4fd9ace8bd56" (UID: "665dec65-293e-4768-9b29-4fd9ace8bd56"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.766113 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/665dec65-293e-4768-9b29-4fd9ace8bd56-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.766141 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e300fc33-07d6-49b7-8549-7ac0fd513183-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.771631 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/665dec65-293e-4768-9b29-4fd9ace8bd56-kube-api-access-8qg4h" (OuterVolumeSpecName: "kube-api-access-8qg4h") pod "665dec65-293e-4768-9b29-4fd9ace8bd56" (UID: "665dec65-293e-4768-9b29-4fd9ace8bd56"). InnerVolumeSpecName "kube-api-access-8qg4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.772034 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e300fc33-07d6-49b7-8549-7ac0fd513183-kube-api-access-qp6sj" (OuterVolumeSpecName: "kube-api-access-qp6sj") pod "e300fc33-07d6-49b7-8549-7ac0fd513183" (UID: "e300fc33-07d6-49b7-8549-7ac0fd513183"). InnerVolumeSpecName "kube-api-access-qp6sj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.868097 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp6sj\" (UniqueName: \"kubernetes.io/projected/e300fc33-07d6-49b7-8549-7ac0fd513183-kube-api-access-qp6sj\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:52 crc kubenswrapper[4787]: I0126 19:27:52.868359 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qg4h\" (UniqueName: \"kubernetes.io/projected/665dec65-293e-4768-9b29-4fd9ace8bd56-kube-api-access-8qg4h\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:53 crc kubenswrapper[4787]: I0126 19:27:53.032718 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7589r"] Jan 26 19:27:53 crc kubenswrapper[4787]: I0126 19:27:53.045222 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7589r"] Jan 26 19:27:53 crc kubenswrapper[4787]: I0126 19:27:53.137910 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-gt2wn" event={"ID":"665dec65-293e-4768-9b29-4fd9ace8bd56","Type":"ContainerDied","Data":"c2e97156e04babd8234279645ffaec682263b2c136570fce7db5f042e43be7d4"} Jan 26 19:27:53 crc kubenswrapper[4787]: I0126 19:27:53.137989 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2e97156e04babd8234279645ffaec682263b2c136570fce7db5f042e43be7d4" Jan 26 19:27:53 crc kubenswrapper[4787]: I0126 19:27:53.138039 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-gt2wn" Jan 26 19:27:53 crc kubenswrapper[4787]: I0126 19:27:53.141402 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3fc2-account-create-update-5qwlg" event={"ID":"e300fc33-07d6-49b7-8549-7ac0fd513183","Type":"ContainerDied","Data":"0fc5934b45bf50741dda9c9b8eb834be784dbd1bec7da0075a3c02703399f248"} Jan 26 19:27:53 crc kubenswrapper[4787]: I0126 19:27:53.141439 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fc5934b45bf50741dda9c9b8eb834be784dbd1bec7da0075a3c02703399f248" Jan 26 19:27:53 crc kubenswrapper[4787]: I0126 19:27:53.141581 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3fc2-account-create-update-5qwlg" Jan 26 19:27:53 crc kubenswrapper[4787]: I0126 19:27:53.141728 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 19:27:53 crc kubenswrapper[4787]: I0126 19:27:53.604313 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582540f4-da0d-43fa-9bb8-fe6642cd5af0" path="/var/lib/kubelet/pods/582540f4-da0d-43fa-9bb8-fe6642cd5af0/volumes" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.541745 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-f8b8t"] Jan 26 19:27:54 crc kubenswrapper[4787]: E0126 19:27:54.542364 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300fc33-07d6-49b7-8549-7ac0fd513183" containerName="mariadb-account-create-update" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.542376 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300fc33-07d6-49b7-8549-7ac0fd513183" containerName="mariadb-account-create-update" Jan 26 19:27:54 crc kubenswrapper[4787]: E0126 19:27:54.542408 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665dec65-293e-4768-9b29-4fd9ace8bd56" 
containerName="mariadb-database-create" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.542414 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="665dec65-293e-4768-9b29-4fd9ace8bd56" containerName="mariadb-database-create" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.542586 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300fc33-07d6-49b7-8549-7ac0fd513183" containerName="mariadb-account-create-update" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.542607 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="665dec65-293e-4768-9b29-4fd9ace8bd56" containerName="mariadb-database-create" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.543328 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.545364 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.545789 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-5d6j9" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.556260 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-f8b8t"] Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.710637 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-config-data\") pod \"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.710790 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-job-config-data\") pod 
\"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.710823 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wfkf\" (UniqueName: \"kubernetes.io/projected/490efd8f-2084-4275-96a4-8d458e201ed1-kube-api-access-5wfkf\") pod \"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.710928 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-combined-ca-bundle\") pod \"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.812649 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-combined-ca-bundle\") pod \"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.812735 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-config-data\") pod \"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.812814 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-job-config-data\") pod \"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " 
pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.812840 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wfkf\" (UniqueName: \"kubernetes.io/projected/490efd8f-2084-4275-96a4-8d458e201ed1-kube-api-access-5wfkf\") pod \"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.818557 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-job-config-data\") pod \"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.818649 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-config-data\") pod \"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.819604 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-combined-ca-bundle\") pod \"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.838690 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wfkf\" (UniqueName: \"kubernetes.io/projected/490efd8f-2084-4275-96a4-8d458e201ed1-kube-api-access-5wfkf\") pod \"manila-db-sync-f8b8t\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:54 crc kubenswrapper[4787]: I0126 19:27:54.903767 4787 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/manila-db-sync-f8b8t" Jan 26 19:27:55 crc kubenswrapper[4787]: I0126 19:27:55.593175 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:27:55 crc kubenswrapper[4787]: E0126 19:27:55.593837 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:27:55 crc kubenswrapper[4787]: I0126 19:27:55.779792 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-f8b8t"] Jan 26 19:27:55 crc kubenswrapper[4787]: W0126 19:27:55.783805 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod490efd8f_2084_4275_96a4_8d458e201ed1.slice/crio-d1c84f0f3de2cc3083cc059fb8aa8262cb8dca8ef63e86a3e7e5e8669874e7dd WatchSource:0}: Error finding container d1c84f0f3de2cc3083cc059fb8aa8262cb8dca8ef63e86a3e7e5e8669874e7dd: Status 404 returned error can't find the container with id d1c84f0f3de2cc3083cc059fb8aa8262cb8dca8ef63e86a3e7e5e8669874e7dd Jan 26 19:27:56 crc kubenswrapper[4787]: I0126 19:27:56.166694 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-f8b8t" event={"ID":"490efd8f-2084-4275-96a4-8d458e201ed1","Type":"ContainerStarted","Data":"d1c84f0f3de2cc3083cc059fb8aa8262cb8dca8ef63e86a3e7e5e8669874e7dd"} Jan 26 19:28:01 crc kubenswrapper[4787]: I0126 19:28:01.214598 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-f8b8t" 
event={"ID":"490efd8f-2084-4275-96a4-8d458e201ed1","Type":"ContainerStarted","Data":"ca07cc9ef2af9c6e98ff72a9fa3d82c44436d8df3fe0221a4f74777bd795b99a"} Jan 26 19:28:01 crc kubenswrapper[4787]: I0126 19:28:01.247140 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-f8b8t" podStartSLOduration=2.908449508 podStartE2EDuration="7.247117545s" podCreationTimestamp="2026-01-26 19:27:54 +0000 UTC" firstStartedPulling="2026-01-26 19:27:55.786378799 +0000 UTC m=+6244.493514932" lastFinishedPulling="2026-01-26 19:28:00.125046846 +0000 UTC m=+6248.832182969" observedRunningTime="2026-01-26 19:28:01.23347871 +0000 UTC m=+6249.940614873" watchObservedRunningTime="2026-01-26 19:28:01.247117545 +0000 UTC m=+6249.954253678" Jan 26 19:28:01 crc kubenswrapper[4787]: I0126 19:28:01.447884 4787 scope.go:117] "RemoveContainer" containerID="2bd48796434870848e64f1b66dd8bc1346dfc073af5829f5833f29a52641af5e" Jan 26 19:28:01 crc kubenswrapper[4787]: I0126 19:28:01.475555 4787 scope.go:117] "RemoveContainer" containerID="0617d01c395c7926041a0740b485dcee446a92d03ee4ec34a7139e80c333a264" Jan 26 19:28:01 crc kubenswrapper[4787]: I0126 19:28:01.517996 4787 scope.go:117] "RemoveContainer" containerID="fb9bc80d9bd68694787d1ee24108afbec0ec732b6fc92b60dc782228cddf42ff" Jan 26 19:28:01 crc kubenswrapper[4787]: I0126 19:28:01.571277 4787 scope.go:117] "RemoveContainer" containerID="b9642c5edb6006ef9a0d56f60ae9debec0adbaed4ea73a5e8b2b1f0a66560c59" Jan 26 19:28:01 crc kubenswrapper[4787]: I0126 19:28:01.630435 4787 scope.go:117] "RemoveContainer" containerID="ef6b0bb4c6e58bcc60675934e65b14d02274978fb788762733a7642fa23aca95" Jan 26 19:28:01 crc kubenswrapper[4787]: I0126 19:28:01.680085 4787 scope.go:117] "RemoveContainer" containerID="37825d6251dc5111aad182e19f10adab3b49f46788c07fd612dc2da6c73d6bf5" Jan 26 19:28:03 crc kubenswrapper[4787]: I0126 19:28:03.235906 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="490efd8f-2084-4275-96a4-8d458e201ed1" containerID="ca07cc9ef2af9c6e98ff72a9fa3d82c44436d8df3fe0221a4f74777bd795b99a" exitCode=0 Jan 26 19:28:03 crc kubenswrapper[4787]: I0126 19:28:03.235958 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-f8b8t" event={"ID":"490efd8f-2084-4275-96a4-8d458e201ed1","Type":"ContainerDied","Data":"ca07cc9ef2af9c6e98ff72a9fa3d82c44436d8df3fe0221a4f74777bd795b99a"} Jan 26 19:28:04 crc kubenswrapper[4787]: I0126 19:28:04.738979 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-f8b8t" Jan 26 19:28:04 crc kubenswrapper[4787]: I0126 19:28:04.940245 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-combined-ca-bundle\") pod \"490efd8f-2084-4275-96a4-8d458e201ed1\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " Jan 26 19:28:04 crc kubenswrapper[4787]: I0126 19:28:04.940437 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wfkf\" (UniqueName: \"kubernetes.io/projected/490efd8f-2084-4275-96a4-8d458e201ed1-kube-api-access-5wfkf\") pod \"490efd8f-2084-4275-96a4-8d458e201ed1\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " Jan 26 19:28:04 crc kubenswrapper[4787]: I0126 19:28:04.940514 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-config-data\") pod \"490efd8f-2084-4275-96a4-8d458e201ed1\" (UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " Jan 26 19:28:04 crc kubenswrapper[4787]: I0126 19:28:04.940602 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-job-config-data\") pod \"490efd8f-2084-4275-96a4-8d458e201ed1\" 
(UID: \"490efd8f-2084-4275-96a4-8d458e201ed1\") " Jan 26 19:28:04 crc kubenswrapper[4787]: I0126 19:28:04.945738 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "490efd8f-2084-4275-96a4-8d458e201ed1" (UID: "490efd8f-2084-4275-96a4-8d458e201ed1"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:28:04 crc kubenswrapper[4787]: I0126 19:28:04.946417 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490efd8f-2084-4275-96a4-8d458e201ed1-kube-api-access-5wfkf" (OuterVolumeSpecName: "kube-api-access-5wfkf") pod "490efd8f-2084-4275-96a4-8d458e201ed1" (UID: "490efd8f-2084-4275-96a4-8d458e201ed1"). InnerVolumeSpecName "kube-api-access-5wfkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:28:04 crc kubenswrapper[4787]: I0126 19:28:04.949282 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-config-data" (OuterVolumeSpecName: "config-data") pod "490efd8f-2084-4275-96a4-8d458e201ed1" (UID: "490efd8f-2084-4275-96a4-8d458e201ed1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:28:04 crc kubenswrapper[4787]: I0126 19:28:04.982361 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "490efd8f-2084-4275-96a4-8d458e201ed1" (UID: "490efd8f-2084-4275-96a4-8d458e201ed1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.042785 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wfkf\" (UniqueName: \"kubernetes.io/projected/490efd8f-2084-4275-96a4-8d458e201ed1-kube-api-access-5wfkf\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.042825 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.042837 4787 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.042851 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490efd8f-2084-4275-96a4-8d458e201ed1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.262718 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-f8b8t" event={"ID":"490efd8f-2084-4275-96a4-8d458e201ed1","Type":"ContainerDied","Data":"d1c84f0f3de2cc3083cc059fb8aa8262cb8dca8ef63e86a3e7e5e8669874e7dd"} Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.262760 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1c84f0f3de2cc3083cc059fb8aa8262cb8dca8ef63e86a3e7e5e8669874e7dd" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.262819 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-f8b8t" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.638667 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 19:28:05 crc kubenswrapper[4787]: E0126 19:28:05.639526 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490efd8f-2084-4275-96a4-8d458e201ed1" containerName="manila-db-sync" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.639552 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="490efd8f-2084-4275-96a4-8d458e201ed1" containerName="manila-db-sync" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.639811 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="490efd8f-2084-4275-96a4-8d458e201ed1" containerName="manila-db-sync" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.641261 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.643345 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.643447 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-5d6j9" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.643761 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.647653 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.653565 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbjtw\" (UniqueName: \"kubernetes.io/projected/f49effb7-5427-4a7b-ba48-2137cdcddbe8-kube-api-access-bbjtw\") pod \"manila-scheduler-0\" (UID: 
\"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.653713 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-config-data\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.653778 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.653806 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f49effb7-5427-4a7b-ba48-2137cdcddbe8-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.654186 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.654274 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-scripts\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " 
pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.660628 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.662518 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.664649 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.683669 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.727373 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.756367 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.756422 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-ceph\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.756452 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvxdh\" (UniqueName: \"kubernetes.io/projected/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-kube-api-access-zvxdh\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " 
pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.756526 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-config-data\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.756575 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.756602 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.756629 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f49effb7-5427-4a7b-ba48-2137cdcddbe8-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.757196 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.757228 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.757269 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-scripts\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.757307 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.757359 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-scripts\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.757422 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-config-data\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.757450 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bbjtw\" (UniqueName: \"kubernetes.io/projected/f49effb7-5427-4a7b-ba48-2137cdcddbe8-kube-api-access-bbjtw\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.759847 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f49effb7-5427-4a7b-ba48-2137cdcddbe8-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.765439 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-config-data\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.768927 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.772391 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-scripts\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.778207 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f49effb7-5427-4a7b-ba48-2137cdcddbe8-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " 
pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.779704 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65df876645-cqjbm"] Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.785335 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbjtw\" (UniqueName: \"kubernetes.io/projected/f49effb7-5427-4a7b-ba48-2137cdcddbe8-kube-api-access-bbjtw\") pod \"manila-scheduler-0\" (UID: \"f49effb7-5427-4a7b-ba48-2137cdcddbe8\") " pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.805071 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65df876645-cqjbm"] Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.805187 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.859369 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.859528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.859594 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-config-data-custom\") pod \"manila-share-share1-0\" (UID: 
\"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.859626 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-scripts\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.859705 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-config-data\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.859746 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-ceph\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.859769 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.859792 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvxdh\" (UniqueName: \"kubernetes.io/projected/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-kube-api-access-zvxdh\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.859788 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.859968 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.864526 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.864884 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-config-data\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.865385 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.867799 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-ceph\") 
pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.870650 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-scripts\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.882464 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvxdh\" (UniqueName: \"kubernetes.io/projected/e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a-kube-api-access-zvxdh\") pod \"manila-share-share1-0\" (UID: \"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a\") " pod="openstack/manila-share-share1-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.940530 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.942375 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.944871 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.951530 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.961314 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-config\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.961354 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-nb\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.963571 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7rfl\" (UniqueName: \"kubernetes.io/projected/53889309-1901-433a-8ba7-daa0657ca258-kube-api-access-h7rfl\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.963603 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-dns-svc\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 
19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.963625 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-sb\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.969258 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 19:28:05 crc kubenswrapper[4787]: I0126 19:28:05.979632 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.066699 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7rfl\" (UniqueName: \"kubernetes.io/projected/53889309-1901-433a-8ba7-daa0657ca258-kube-api-access-h7rfl\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.067081 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-dns-svc\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.067108 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-sb\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.067228 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-config-data-custom\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.067270 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-config-data\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.067331 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-scripts\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.067468 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71cae724-1a8b-41b8-a3a9-eb3e70af9024-etc-machine-id\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.067570 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71cae724-1a8b-41b8-a3a9-eb3e70af9024-logs\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.067711 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbd6\" (UniqueName: 
\"kubernetes.io/projected/71cae724-1a8b-41b8-a3a9-eb3e70af9024-kube-api-access-gpbd6\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.067800 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.067878 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-config\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.067908 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-nb\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.068199 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-dns-svc\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.068304 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-sb\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: 
\"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.069057 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-config\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.069086 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-nb\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.090843 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7rfl\" (UniqueName: \"kubernetes.io/projected/53889309-1901-433a-8ba7-daa0657ca258-kube-api-access-h7rfl\") pod \"dnsmasq-dns-65df876645-cqjbm\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.170085 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-config-data-custom\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.170166 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-config-data\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 
19:28:06.170225 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-scripts\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.170254 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71cae724-1a8b-41b8-a3a9-eb3e70af9024-etc-machine-id\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.170281 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71cae724-1a8b-41b8-a3a9-eb3e70af9024-logs\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.170332 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpbd6\" (UniqueName: \"kubernetes.io/projected/71cae724-1a8b-41b8-a3a9-eb3e70af9024-kube-api-access-gpbd6\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.170372 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.171565 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71cae724-1a8b-41b8-a3a9-eb3e70af9024-logs\") pod \"manila-api-0\" (UID: 
\"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.171649 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71cae724-1a8b-41b8-a3a9-eb3e70af9024-etc-machine-id\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.174693 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-scripts\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.175167 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-config-data\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.175669 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-config-data-custom\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.176283 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71cae724-1a8b-41b8-a3a9-eb3e70af9024-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.197963 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpbd6\" (UniqueName: 
\"kubernetes.io/projected/71cae724-1a8b-41b8-a3a9-eb3e70af9024-kube-api-access-gpbd6\") pod \"manila-api-0\" (UID: \"71cae724-1a8b-41b8-a3a9-eb3e70af9024\") " pod="openstack/manila-api-0" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.284768 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:06 crc kubenswrapper[4787]: I0126 19:28:06.295681 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 26 19:28:07 crc kubenswrapper[4787]: I0126 19:28:07.355903 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65df876645-cqjbm"] Jan 26 19:28:07 crc kubenswrapper[4787]: I0126 19:28:07.366566 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 19:28:07 crc kubenswrapper[4787]: I0126 19:28:07.438846 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 26 19:28:07 crc kubenswrapper[4787]: I0126 19:28:07.663833 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 19:28:08 crc kubenswrapper[4787]: I0126 19:28:08.321755 4787 generic.go:334] "Generic (PLEG): container finished" podID="53889309-1901-433a-8ba7-daa0657ca258" containerID="88794d9ce369f495b7259b778fe2f30e0d3abf4ba9e552456a92786cf50e2601" exitCode=0 Jan 26 19:28:08 crc kubenswrapper[4787]: I0126 19:28:08.322140 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65df876645-cqjbm" event={"ID":"53889309-1901-433a-8ba7-daa0657ca258","Type":"ContainerDied","Data":"88794d9ce369f495b7259b778fe2f30e0d3abf4ba9e552456a92786cf50e2601"} Jan 26 19:28:08 crc kubenswrapper[4787]: I0126 19:28:08.322173 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65df876645-cqjbm" 
event={"ID":"53889309-1901-433a-8ba7-daa0657ca258","Type":"ContainerStarted","Data":"d90489d497ba1ef9b3fc1f3172722bc8244a0a7e5721856cbad721f7a624a1a3"} Jan 26 19:28:08 crc kubenswrapper[4787]: I0126 19:28:08.332928 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a","Type":"ContainerStarted","Data":"ae997235797b6e3be133c0953ccf0c752e6ee734d3c521f92bf6decdf6d1114e"} Jan 26 19:28:08 crc kubenswrapper[4787]: I0126 19:28:08.339786 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"71cae724-1a8b-41b8-a3a9-eb3e70af9024","Type":"ContainerStarted","Data":"42a47c2090f75ee9ab93eca361d3ea194edc286d45d2dacb3772d55923e7c286"} Jan 26 19:28:08 crc kubenswrapper[4787]: I0126 19:28:08.339848 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"71cae724-1a8b-41b8-a3a9-eb3e70af9024","Type":"ContainerStarted","Data":"5d6df44a31d6de47c0725835437983ea1b6092a37da16b0217a14b15cf36bb6f"} Jan 26 19:28:08 crc kubenswrapper[4787]: I0126 19:28:08.341704 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f49effb7-5427-4a7b-ba48-2137cdcddbe8","Type":"ContainerStarted","Data":"7997730bc48c2abb41be5fe672e21892b272e98bd8a747bc9f85309f6d67d0fb"} Jan 26 19:28:09 crc kubenswrapper[4787]: I0126 19:28:09.363935 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"71cae724-1a8b-41b8-a3a9-eb3e70af9024","Type":"ContainerStarted","Data":"654dc99ad237e6b83d6116fa109871741d4bfedf3494e95e6007c91a88196fa3"} Jan 26 19:28:09 crc kubenswrapper[4787]: I0126 19:28:09.364595 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 26 19:28:09 crc kubenswrapper[4787]: I0126 19:28:09.374431 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"f49effb7-5427-4a7b-ba48-2137cdcddbe8","Type":"ContainerStarted","Data":"599a3064525c8f6dac22b48e4951b607dbdaf0487d4aa2da2e8c09ade299fb93"} Jan 26 19:28:09 crc kubenswrapper[4787]: I0126 19:28:09.378092 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65df876645-cqjbm" event={"ID":"53889309-1901-433a-8ba7-daa0657ca258","Type":"ContainerStarted","Data":"0e6da2269b17e0b8691363e24bfb1eeeb3fcc44ba0384f0ed092ecb370eb4623"} Jan 26 19:28:09 crc kubenswrapper[4787]: I0126 19:28:09.379212 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:09 crc kubenswrapper[4787]: I0126 19:28:09.400348 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.4003291 podStartE2EDuration="4.4003291s" podCreationTimestamp="2026-01-26 19:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:28:09.394394475 +0000 UTC m=+6258.101530618" watchObservedRunningTime="2026-01-26 19:28:09.4003291 +0000 UTC m=+6258.107465223" Jan 26 19:28:09 crc kubenswrapper[4787]: I0126 19:28:09.422749 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65df876645-cqjbm" podStartSLOduration=4.422731859 podStartE2EDuration="4.422731859s" podCreationTimestamp="2026-01-26 19:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:28:09.413465903 +0000 UTC m=+6258.120602056" watchObservedRunningTime="2026-01-26 19:28:09.422731859 +0000 UTC m=+6258.129867992" Jan 26 19:28:10 crc kubenswrapper[4787]: I0126 19:28:10.392846 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"f49effb7-5427-4a7b-ba48-2137cdcddbe8","Type":"ContainerStarted","Data":"f752708425005e6650f06bdc0fa40efd725feea3be49940a0c624303e7fab4ce"} Jan 26 19:28:10 crc kubenswrapper[4787]: I0126 19:28:10.415257 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.759386663 podStartE2EDuration="5.415239232s" podCreationTimestamp="2026-01-26 19:28:05 +0000 UTC" firstStartedPulling="2026-01-26 19:28:07.377721154 +0000 UTC m=+6256.084857287" lastFinishedPulling="2026-01-26 19:28:08.033573723 +0000 UTC m=+6256.740709856" observedRunningTime="2026-01-26 19:28:10.40943185 +0000 UTC m=+6259.116567983" watchObservedRunningTime="2026-01-26 19:28:10.415239232 +0000 UTC m=+6259.122375365" Jan 26 19:28:10 crc kubenswrapper[4787]: I0126 19:28:10.589205 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:28:10 crc kubenswrapper[4787]: E0126 19:28:10.589485 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:28:15 crc kubenswrapper[4787]: I0126 19:28:15.478206 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a","Type":"ContainerStarted","Data":"9e297d4d80fd79efec2433da90d04bace9c1f0fd70a46b788776d285d32097c3"} Jan 26 19:28:15 crc kubenswrapper[4787]: I0126 19:28:15.969817 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 26 19:28:16 crc kubenswrapper[4787]: I0126 19:28:16.290177 4787 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:28:16 crc kubenswrapper[4787]: I0126 19:28:16.364055 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c8944c79c-64xh4"] Jan 26 19:28:16 crc kubenswrapper[4787]: I0126 19:28:16.364282 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" podUID="93e77987-fd7e-41c8-af53-afbbc13b6f5b" containerName="dnsmasq-dns" containerID="cri-o://c2bc914e756812c196e27bc416cd33fd1c4af78ef27483bbc29cb6cdedd34e6b" gracePeriod=10 Jan 26 19:28:16 crc kubenswrapper[4787]: I0126 19:28:16.497086 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a","Type":"ContainerStarted","Data":"81e0dcbaf3b07077482195785d6185e690267440d9a5d2b2bade419e2f06a960"} Jan 26 19:28:16 crc kubenswrapper[4787]: I0126 19:28:16.529026 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.587491699 podStartE2EDuration="11.529000968s" podCreationTimestamp="2026-01-26 19:28:05 +0000 UTC" firstStartedPulling="2026-01-26 19:28:07.665454338 +0000 UTC m=+6256.372590471" lastFinishedPulling="2026-01-26 19:28:14.606963607 +0000 UTC m=+6263.314099740" observedRunningTime="2026-01-26 19:28:16.52131617 +0000 UTC m=+6265.228452303" watchObservedRunningTime="2026-01-26 19:28:16.529000968 +0000 UTC m=+6265.236137101" Jan 26 19:28:17 crc kubenswrapper[4787]: I0126 19:28:17.508505 4787 generic.go:334] "Generic (PLEG): container finished" podID="93e77987-fd7e-41c8-af53-afbbc13b6f5b" containerID="c2bc914e756812c196e27bc416cd33fd1c4af78ef27483bbc29cb6cdedd34e6b" exitCode=0 Jan 26 19:28:17 crc kubenswrapper[4787]: I0126 19:28:17.508573 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" 
event={"ID":"93e77987-fd7e-41c8-af53-afbbc13b6f5b","Type":"ContainerDied","Data":"c2bc914e756812c196e27bc416cd33fd1c4af78ef27483bbc29cb6cdedd34e6b"} Jan 26 19:28:17 crc kubenswrapper[4787]: I0126 19:28:17.545036 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.256157 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.378717 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-config\") pod \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.378816 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-sb\") pod \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.378852 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-dns-svc\") pod \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.378994 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-nb\") pod \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.379027 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5v44h\" (UniqueName: \"kubernetes.io/projected/93e77987-fd7e-41c8-af53-afbbc13b6f5b-kube-api-access-5v44h\") pod \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\" (UID: \"93e77987-fd7e-41c8-af53-afbbc13b6f5b\") " Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.394379 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e77987-fd7e-41c8-af53-afbbc13b6f5b-kube-api-access-5v44h" (OuterVolumeSpecName: "kube-api-access-5v44h") pod "93e77987-fd7e-41c8-af53-afbbc13b6f5b" (UID: "93e77987-fd7e-41c8-af53-afbbc13b6f5b"). InnerVolumeSpecName "kube-api-access-5v44h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.452017 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-config" (OuterVolumeSpecName: "config") pod "93e77987-fd7e-41c8-af53-afbbc13b6f5b" (UID: "93e77987-fd7e-41c8-af53-afbbc13b6f5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.455891 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93e77987-fd7e-41c8-af53-afbbc13b6f5b" (UID: "93e77987-fd7e-41c8-af53-afbbc13b6f5b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.458616 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93e77987-fd7e-41c8-af53-afbbc13b6f5b" (UID: "93e77987-fd7e-41c8-af53-afbbc13b6f5b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.468716 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93e77987-fd7e-41c8-af53-afbbc13b6f5b" (UID: "93e77987-fd7e-41c8-af53-afbbc13b6f5b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.481971 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.482009 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v44h\" (UniqueName: \"kubernetes.io/projected/93e77987-fd7e-41c8-af53-afbbc13b6f5b-kube-api-access-5v44h\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.482021 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.482030 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.482038 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93e77987-fd7e-41c8-af53-afbbc13b6f5b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.519692 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" 
event={"ID":"93e77987-fd7e-41c8-af53-afbbc13b6f5b","Type":"ContainerDied","Data":"406447dd4d1cecb00d552ca39893639ddf4ddb1ee4961891d605f09496511cee"} Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.519766 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c8944c79c-64xh4" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.519775 4787 scope.go:117] "RemoveContainer" containerID="c2bc914e756812c196e27bc416cd33fd1c4af78ef27483bbc29cb6cdedd34e6b" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.581701 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c8944c79c-64xh4"] Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.595381 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c8944c79c-64xh4"] Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.617296 4787 scope.go:117] "RemoveContainer" containerID="24a89068cba3a824087db65dac8b4a4a06bfd0288b9ee5c8c9d970b1a34f5d5e" Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.748641 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.749462 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="ceilometer-central-agent" containerID="cri-o://62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9" gracePeriod=30 Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.749501 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="sg-core" containerID="cri-o://c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844" gracePeriod=30 Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.749515 4787 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="proxy-httpd" containerID="cri-o://8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1" gracePeriod=30 Jan 26 19:28:18 crc kubenswrapper[4787]: I0126 19:28:18.749526 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="ceilometer-notification-agent" containerID="cri-o://8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0" gracePeriod=30 Jan 26 19:28:18 crc kubenswrapper[4787]: E0126 19:28:18.825365 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93e77987_fd7e_41c8_af53_afbbc13b6f5b.slice/crio-406447dd4d1cecb00d552ca39893639ddf4ddb1ee4961891d605f09496511cee\": RecentStats: unable to find data in memory cache]" Jan 26 19:28:19 crc kubenswrapper[4787]: I0126 19:28:19.549629 4787 generic.go:334] "Generic (PLEG): container finished" podID="ce9ead37-b085-4ea7-aa36-65471119a627" containerID="8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1" exitCode=0 Jan 26 19:28:19 crc kubenswrapper[4787]: I0126 19:28:19.549982 4787 generic.go:334] "Generic (PLEG): container finished" podID="ce9ead37-b085-4ea7-aa36-65471119a627" containerID="c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844" exitCode=2 Jan 26 19:28:19 crc kubenswrapper[4787]: I0126 19:28:19.549996 4787 generic.go:334] "Generic (PLEG): container finished" podID="ce9ead37-b085-4ea7-aa36-65471119a627" containerID="62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9" exitCode=0 Jan 26 19:28:19 crc kubenswrapper[4787]: I0126 19:28:19.550023 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ce9ead37-b085-4ea7-aa36-65471119a627","Type":"ContainerDied","Data":"8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1"} Jan 26 19:28:19 crc kubenswrapper[4787]: I0126 19:28:19.550055 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9ead37-b085-4ea7-aa36-65471119a627","Type":"ContainerDied","Data":"c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844"} Jan 26 19:28:19 crc kubenswrapper[4787]: I0126 19:28:19.550082 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9ead37-b085-4ea7-aa36-65471119a627","Type":"ContainerDied","Data":"62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9"} Jan 26 19:28:19 crc kubenswrapper[4787]: I0126 19:28:19.605364 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e77987-fd7e-41c8-af53-afbbc13b6f5b" path="/var/lib/kubelet/pods/93e77987-fd7e-41c8-af53-afbbc13b6f5b/volumes" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.192606 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.343262 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-config-data\") pod \"ce9ead37-b085-4ea7-aa36-65471119a627\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.343334 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-sg-core-conf-yaml\") pod \"ce9ead37-b085-4ea7-aa36-65471119a627\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.343379 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng7kz\" (UniqueName: \"kubernetes.io/projected/ce9ead37-b085-4ea7-aa36-65471119a627-kube-api-access-ng7kz\") pod \"ce9ead37-b085-4ea7-aa36-65471119a627\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.343398 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-run-httpd\") pod \"ce9ead37-b085-4ea7-aa36-65471119a627\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.343425 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-combined-ca-bundle\") pod \"ce9ead37-b085-4ea7-aa36-65471119a627\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.343447 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-log-httpd\") pod \"ce9ead37-b085-4ea7-aa36-65471119a627\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.344020 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ce9ead37-b085-4ea7-aa36-65471119a627" (UID: "ce9ead37-b085-4ea7-aa36-65471119a627"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.344215 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-scripts\") pod \"ce9ead37-b085-4ea7-aa36-65471119a627\" (UID: \"ce9ead37-b085-4ea7-aa36-65471119a627\") " Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.344610 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ce9ead37-b085-4ea7-aa36-65471119a627" (UID: "ce9ead37-b085-4ea7-aa36-65471119a627"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.345428 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.345448 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9ead37-b085-4ea7-aa36-65471119a627-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.349876 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9ead37-b085-4ea7-aa36-65471119a627-kube-api-access-ng7kz" (OuterVolumeSpecName: "kube-api-access-ng7kz") pod "ce9ead37-b085-4ea7-aa36-65471119a627" (UID: "ce9ead37-b085-4ea7-aa36-65471119a627"). InnerVolumeSpecName "kube-api-access-ng7kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.355157 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-scripts" (OuterVolumeSpecName: "scripts") pod "ce9ead37-b085-4ea7-aa36-65471119a627" (UID: "ce9ead37-b085-4ea7-aa36-65471119a627"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.373507 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ce9ead37-b085-4ea7-aa36-65471119a627" (UID: "ce9ead37-b085-4ea7-aa36-65471119a627"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.441811 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce9ead37-b085-4ea7-aa36-65471119a627" (UID: "ce9ead37-b085-4ea7-aa36-65471119a627"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.447533 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.447565 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng7kz\" (UniqueName: \"kubernetes.io/projected/ce9ead37-b085-4ea7-aa36-65471119a627-kube-api-access-ng7kz\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.447576 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.447585 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.456399 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-config-data" (OuterVolumeSpecName: "config-data") pod "ce9ead37-b085-4ea7-aa36-65471119a627" (UID: "ce9ead37-b085-4ea7-aa36-65471119a627"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.549737 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9ead37-b085-4ea7-aa36-65471119a627-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.571551 4787 generic.go:334] "Generic (PLEG): container finished" podID="ce9ead37-b085-4ea7-aa36-65471119a627" containerID="8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0" exitCode=0 Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.571600 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9ead37-b085-4ea7-aa36-65471119a627","Type":"ContainerDied","Data":"8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0"} Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.571611 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.571631 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9ead37-b085-4ea7-aa36-65471119a627","Type":"ContainerDied","Data":"18d049f7ae44d620d0607c0b87a075cdfaed5f66299c05fd7dcdbbd8d7b63235"} Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.571655 4787 scope.go:117] "RemoveContainer" containerID="8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.591416 4787 scope.go:117] "RemoveContainer" containerID="c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.613414 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.621074 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 
19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.622738 4787 scope.go:117] "RemoveContainer" containerID="8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.641603 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:28:21 crc kubenswrapper[4787]: E0126 19:28:21.642102 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="ceilometer-central-agent" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.642125 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="ceilometer-central-agent" Jan 26 19:28:21 crc kubenswrapper[4787]: E0126 19:28:21.642151 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e77987-fd7e-41c8-af53-afbbc13b6f5b" containerName="dnsmasq-dns" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.642158 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e77987-fd7e-41c8-af53-afbbc13b6f5b" containerName="dnsmasq-dns" Jan 26 19:28:21 crc kubenswrapper[4787]: E0126 19:28:21.642171 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e77987-fd7e-41c8-af53-afbbc13b6f5b" containerName="init" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.642177 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e77987-fd7e-41c8-af53-afbbc13b6f5b" containerName="init" Jan 26 19:28:21 crc kubenswrapper[4787]: E0126 19:28:21.642193 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="proxy-httpd" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.642199 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="proxy-httpd" Jan 26 19:28:21 crc kubenswrapper[4787]: E0126 19:28:21.642228 4787 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="ceilometer-notification-agent" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.642235 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="ceilometer-notification-agent" Jan 26 19:28:21 crc kubenswrapper[4787]: E0126 19:28:21.642245 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="sg-core" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.642251 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="sg-core" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.642458 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="proxy-httpd" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.642473 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e77987-fd7e-41c8-af53-afbbc13b6f5b" containerName="dnsmasq-dns" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.642486 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="sg-core" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.642510 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="ceilometer-notification-agent" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.642519 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" containerName="ceilometer-central-agent" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.644663 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.650206 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.650006 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.663192 4787 scope.go:117] "RemoveContainer" containerID="62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.670448 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.695112 4787 scope.go:117] "RemoveContainer" containerID="8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1" Jan 26 19:28:21 crc kubenswrapper[4787]: E0126 19:28:21.695593 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1\": container with ID starting with 8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1 not found: ID does not exist" containerID="8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.695621 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1"} err="failed to get container status \"8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1\": rpc error: code = NotFound desc = could not find container \"8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1\": container with ID starting with 8bcff1c2da22a1dc7e3d8724038f8c33e5577ab23ce2df8773c4e868c16652f1 not found: ID does not exist" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 
19:28:21.695641 4787 scope.go:117] "RemoveContainer" containerID="c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844" Jan 26 19:28:21 crc kubenswrapper[4787]: E0126 19:28:21.695935 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844\": container with ID starting with c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844 not found: ID does not exist" containerID="c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.695981 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844"} err="failed to get container status \"c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844\": rpc error: code = NotFound desc = could not find container \"c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844\": container with ID starting with c2c41a9f8c4c5f072df99bbbefe2ccec1405287b6f14acf6d4cd159c0c684844 not found: ID does not exist" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.695996 4787 scope.go:117] "RemoveContainer" containerID="8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0" Jan 26 19:28:21 crc kubenswrapper[4787]: E0126 19:28:21.696241 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0\": container with ID starting with 8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0 not found: ID does not exist" containerID="8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.696325 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0"} err="failed to get container status \"8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0\": rpc error: code = NotFound desc = could not find container \"8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0\": container with ID starting with 8f829062febc1b0af0700c443ee7cbf94cb6c75898860f6cbed237a38ef04ab0 not found: ID does not exist" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.696341 4787 scope.go:117] "RemoveContainer" containerID="62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9" Jan 26 19:28:21 crc kubenswrapper[4787]: E0126 19:28:21.696695 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9\": container with ID starting with 62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9 not found: ID does not exist" containerID="62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.696720 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9"} err="failed to get container status \"62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9\": rpc error: code = NotFound desc = could not find container \"62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9\": container with ID starting with 62597aef040bf0572bff6378fbc7becdbb35c915ba7a5d248af3aa87ec7ca8b9 not found: ID does not exist" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.754820 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-scripts\") pod \"ceilometer-0\" (UID: 
\"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.754994 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.755047 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7354e50d-99c1-4807-887d-5debe519ff46-run-httpd\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.755283 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8v6d\" (UniqueName: \"kubernetes.io/projected/7354e50d-99c1-4807-887d-5debe519ff46-kube-api-access-p8v6d\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.755354 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-config-data\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.755561 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: 
I0126 19:28:21.755607 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7354e50d-99c1-4807-887d-5debe519ff46-log-httpd\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.857508 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-scripts\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.857582 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.857616 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7354e50d-99c1-4807-887d-5debe519ff46-run-httpd\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.857677 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8v6d\" (UniqueName: \"kubernetes.io/projected/7354e50d-99c1-4807-887d-5debe519ff46-kube-api-access-p8v6d\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.857717 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-config-data\") pod 
\"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.858650 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7354e50d-99c1-4807-887d-5debe519ff46-run-httpd\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.862899 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-config-data\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.862921 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.863770 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.863831 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7354e50d-99c1-4807-887d-5debe519ff46-log-httpd\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.864114 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-scripts\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.864537 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7354e50d-99c1-4807-887d-5debe519ff46-log-httpd\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.871541 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7354e50d-99c1-4807-887d-5debe519ff46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.879055 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8v6d\" (UniqueName: \"kubernetes.io/projected/7354e50d-99c1-4807-887d-5debe519ff46-kube-api-access-p8v6d\") pod \"ceilometer-0\" (UID: \"7354e50d-99c1-4807-887d-5debe519ff46\") " pod="openstack/ceilometer-0" Jan 26 19:28:21 crc kubenswrapper[4787]: I0126 19:28:21.962502 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:28:22 crc kubenswrapper[4787]: I0126 19:28:22.488551 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:28:22 crc kubenswrapper[4787]: W0126 19:28:22.497786 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7354e50d_99c1_4807_887d_5debe519ff46.slice/crio-6b8439ffb97c3bb033044d79eb32a92d0fa943f7d13dd3aa002de73a3c4138b7 WatchSource:0}: Error finding container 6b8439ffb97c3bb033044d79eb32a92d0fa943f7d13dd3aa002de73a3c4138b7: Status 404 returned error can't find the container with id 6b8439ffb97c3bb033044d79eb32a92d0fa943f7d13dd3aa002de73a3c4138b7 Jan 26 19:28:22 crc kubenswrapper[4787]: I0126 19:28:22.587574 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7354e50d-99c1-4807-887d-5debe519ff46","Type":"ContainerStarted","Data":"6b8439ffb97c3bb033044d79eb32a92d0fa943f7d13dd3aa002de73a3c4138b7"} Jan 26 19:28:23 crc kubenswrapper[4787]: I0126 19:28:23.627642 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9ead37-b085-4ea7-aa36-65471119a627" path="/var/lib/kubelet/pods/ce9ead37-b085-4ea7-aa36-65471119a627/volumes" Jan 26 19:28:23 crc kubenswrapper[4787]: I0126 19:28:23.628842 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7354e50d-99c1-4807-887d-5debe519ff46","Type":"ContainerStarted","Data":"ec07f901c0240492ed2a76d83446f3a13a48c45d3831a21fcbb0eca30b86b377"} Jan 26 19:28:24 crc kubenswrapper[4787]: I0126 19:28:24.637727 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7354e50d-99c1-4807-887d-5debe519ff46","Type":"ContainerStarted","Data":"9e4db0e38c5c6aa8f3c0b00c0d1fc55d410ba96dbd91db3a856c6d02fe321937"} Jan 26 19:28:25 crc kubenswrapper[4787]: I0126 19:28:25.590630 4787 scope.go:117] "RemoveContainer" 
containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:28:25 crc kubenswrapper[4787]: E0126 19:28:25.591254 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:28:25 crc kubenswrapper[4787]: I0126 19:28:25.649081 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7354e50d-99c1-4807-887d-5debe519ff46","Type":"ContainerStarted","Data":"41aaed726f768bdea3169771a02391cdd708f8dcdbdc0de12f3b1efd6628365d"} Jan 26 19:28:25 crc kubenswrapper[4787]: I0126 19:28:25.980408 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 26 19:28:26 crc kubenswrapper[4787]: I0126 19:28:26.661378 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7354e50d-99c1-4807-887d-5debe519ff46","Type":"ContainerStarted","Data":"575e94cb234c32032a9a35bd52d2c6ca9480986637384dc4b5f9255ca64717fa"} Jan 26 19:28:26 crc kubenswrapper[4787]: I0126 19:28:26.662591 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 19:28:27 crc kubenswrapper[4787]: I0126 19:28:27.603807 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 26 19:28:27 crc kubenswrapper[4787]: I0126 19:28:27.622319 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.317077684 podStartE2EDuration="6.622297204s" podCreationTimestamp="2026-01-26 19:28:21 +0000 UTC" firstStartedPulling="2026-01-26 
19:28:22.50102741 +0000 UTC m=+6271.208163543" lastFinishedPulling="2026-01-26 19:28:25.80624693 +0000 UTC m=+6274.513383063" observedRunningTime="2026-01-26 19:28:26.684724318 +0000 UTC m=+6275.391860451" watchObservedRunningTime="2026-01-26 19:28:27.622297204 +0000 UTC m=+6276.329433337" Jan 26 19:28:27 crc kubenswrapper[4787]: I0126 19:28:27.727927 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 26 19:28:27 crc kubenswrapper[4787]: I0126 19:28:27.767353 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 26 19:28:37 crc kubenswrapper[4787]: I0126 19:28:37.595550 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:28:37 crc kubenswrapper[4787]: E0126 19:28:37.596993 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:28:48 crc kubenswrapper[4787]: I0126 19:28:48.589521 4787 scope.go:117] "RemoveContainer" containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:28:49 crc kubenswrapper[4787]: I0126 19:28:49.895419 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"25ad5d5ab6da5a26ef32edfd02a6c7f57994a38dadf7f56c7c0ef086287b79ae"} Jan 26 19:28:51 crc kubenswrapper[4787]: I0126 19:28:51.967830 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 
19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.652496 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67f665c9bf-hj65b"] Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.656486 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.658272 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.666444 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67f665c9bf-hj65b"] Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.771412 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-sb\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.771490 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzzr\" (UniqueName: \"kubernetes.io/projected/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-kube-api-access-dbzzr\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.771522 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-openstack-cell1\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.771733 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-config\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.771805 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-nb\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.771942 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-dns-svc\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.876440 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-sb\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.876667 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzzr\" (UniqueName: \"kubernetes.io/projected/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-kube-api-access-dbzzr\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.876733 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-openstack-cell1\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.876800 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-config\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.876835 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-nb\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.876919 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-dns-svc\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.878696 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-openstack-cell1\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.878797 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-sb\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.879616 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-config\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.880149 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-dns-svc\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.880333 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-nb\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.928201 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzzr\" (UniqueName: \"kubernetes.io/projected/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-kube-api-access-dbzzr\") pod \"dnsmasq-dns-67f665c9bf-hj65b\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:10 crc kubenswrapper[4787]: I0126 19:29:10.977813 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:11 crc kubenswrapper[4787]: I0126 19:29:11.512974 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67f665c9bf-hj65b"] Jan 26 19:29:12 crc kubenswrapper[4787]: I0126 19:29:12.133423 4787 generic.go:334] "Generic (PLEG): container finished" podID="4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" containerID="e9d00026c761173cbe7a9a9a96980b4f7705d9f31dc570a8b2ebf20993e8db77" exitCode=0 Jan 26 19:29:12 crc kubenswrapper[4787]: I0126 19:29:12.133492 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" event={"ID":"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253","Type":"ContainerDied","Data":"e9d00026c761173cbe7a9a9a96980b4f7705d9f31dc570a8b2ebf20993e8db77"} Jan 26 19:29:12 crc kubenswrapper[4787]: I0126 19:29:12.133736 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" event={"ID":"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253","Type":"ContainerStarted","Data":"9d430e5d0b90a60fd652b2bb70a1f9879ebb7c5308a9c8b6465feaf4d3ec2239"} Jan 26 19:29:13 crc kubenswrapper[4787]: I0126 19:29:13.150602 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" event={"ID":"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253","Type":"ContainerStarted","Data":"78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0"} Jan 26 19:29:13 crc kubenswrapper[4787]: I0126 19:29:13.153168 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:13 crc kubenswrapper[4787]: I0126 19:29:13.176959 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" podStartSLOduration=3.176922547 podStartE2EDuration="3.176922547s" podCreationTimestamp="2026-01-26 19:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:29:13.174652642 +0000 UTC m=+6321.881788775" watchObservedRunningTime="2026-01-26 19:29:13.176922547 +0000 UTC m=+6321.884058680" Jan 26 19:29:20 crc kubenswrapper[4787]: I0126 19:29:20.981147 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.054801 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65df876645-cqjbm"] Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.055057 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65df876645-cqjbm" podUID="53889309-1901-433a-8ba7-daa0657ca258" containerName="dnsmasq-dns" containerID="cri-o://0e6da2269b17e0b8691363e24bfb1eeeb3fcc44ba0384f0ed092ecb370eb4623" gracePeriod=10 Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.170733 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b94478455-fqg6h"] Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.172478 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.190448 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b94478455-fqg6h"] Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.216385 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzcmq\" (UniqueName: \"kubernetes.io/projected/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-kube-api-access-wzcmq\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.216509 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.216586 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-config\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.216633 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.216735 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-openstack-cell1\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.216783 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-dns-svc\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.254546 4787 generic.go:334] "Generic (PLEG): container finished" podID="53889309-1901-433a-8ba7-daa0657ca258" containerID="0e6da2269b17e0b8691363e24bfb1eeeb3fcc44ba0384f0ed092ecb370eb4623" exitCode=0 Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.254602 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65df876645-cqjbm" event={"ID":"53889309-1901-433a-8ba7-daa0657ca258","Type":"ContainerDied","Data":"0e6da2269b17e0b8691363e24bfb1eeeb3fcc44ba0384f0ed092ecb370eb4623"} Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.286520 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-65df876645-cqjbm" podUID="53889309-1901-433a-8ba7-daa0657ca258" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.139:5353: connect: connection refused" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.318336 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-config\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc 
kubenswrapper[4787]: I0126 19:29:21.318389 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.318459 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-openstack-cell1\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.318487 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-dns-svc\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.318542 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzcmq\" (UniqueName: \"kubernetes.io/projected/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-kube-api-access-wzcmq\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.318580 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.319384 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.319879 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-config\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.320445 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.320918 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-openstack-cell1\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.321454 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-dns-svc\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.395403 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzcmq\" (UniqueName: 
\"kubernetes.io/projected/58461a7b-bde1-45cf-81e3-0dae1ce65e7c-kube-api-access-wzcmq\") pod \"dnsmasq-dns-5b94478455-fqg6h\" (UID: \"58461a7b-bde1-45cf-81e3-0dae1ce65e7c\") " pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.547733 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.780997 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.836471 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-sb\") pod \"53889309-1901-433a-8ba7-daa0657ca258\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.836531 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7rfl\" (UniqueName: \"kubernetes.io/projected/53889309-1901-433a-8ba7-daa0657ca258-kube-api-access-h7rfl\") pod \"53889309-1901-433a-8ba7-daa0657ca258\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.836670 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-config\") pod \"53889309-1901-433a-8ba7-daa0657ca258\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.836751 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-dns-svc\") pod \"53889309-1901-433a-8ba7-daa0657ca258\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " Jan 
26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.836827 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-nb\") pod \"53889309-1901-433a-8ba7-daa0657ca258\" (UID: \"53889309-1901-433a-8ba7-daa0657ca258\") " Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.872073 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53889309-1901-433a-8ba7-daa0657ca258-kube-api-access-h7rfl" (OuterVolumeSpecName: "kube-api-access-h7rfl") pod "53889309-1901-433a-8ba7-daa0657ca258" (UID: "53889309-1901-433a-8ba7-daa0657ca258"). InnerVolumeSpecName "kube-api-access-h7rfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.936562 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "53889309-1901-433a-8ba7-daa0657ca258" (UID: "53889309-1901-433a-8ba7-daa0657ca258"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.939246 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.939275 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7rfl\" (UniqueName: \"kubernetes.io/projected/53889309-1901-433a-8ba7-daa0657ca258-kube-api-access-h7rfl\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.958534 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-config" (OuterVolumeSpecName: "config") pod "53889309-1901-433a-8ba7-daa0657ca258" (UID: "53889309-1901-433a-8ba7-daa0657ca258"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:29:21 crc kubenswrapper[4787]: I0126 19:29:21.960492 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "53889309-1901-433a-8ba7-daa0657ca258" (UID: "53889309-1901-433a-8ba7-daa0657ca258"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.000552 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "53889309-1901-433a-8ba7-daa0657ca258" (UID: "53889309-1901-433a-8ba7-daa0657ca258"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.040791 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.040827 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.040836 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53889309-1901-433a-8ba7-daa0657ca258-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.120841 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b94478455-fqg6h"] Jan 26 19:29:22 crc kubenswrapper[4787]: W0126 19:29:22.127412 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58461a7b_bde1_45cf_81e3_0dae1ce65e7c.slice/crio-2aa47b21ab4209c246552f6febf5eb9ab62e23acce8aa57c00ebc749434c3641 WatchSource:0}: Error finding container 2aa47b21ab4209c246552f6febf5eb9ab62e23acce8aa57c00ebc749434c3641: Status 404 returned error can't find the container with id 2aa47b21ab4209c246552f6febf5eb9ab62e23acce8aa57c00ebc749434c3641 Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.270544 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65df876645-cqjbm" event={"ID":"53889309-1901-433a-8ba7-daa0657ca258","Type":"ContainerDied","Data":"d90489d497ba1ef9b3fc1f3172722bc8244a0a7e5721856cbad721f7a624a1a3"} Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.270574 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65df876645-cqjbm" Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.270604 4787 scope.go:117] "RemoveContainer" containerID="0e6da2269b17e0b8691363e24bfb1eeeb3fcc44ba0384f0ed092ecb370eb4623" Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.285895 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b94478455-fqg6h" event={"ID":"58461a7b-bde1-45cf-81e3-0dae1ce65e7c","Type":"ContainerStarted","Data":"2aa47b21ab4209c246552f6febf5eb9ab62e23acce8aa57c00ebc749434c3641"} Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.307332 4787 scope.go:117] "RemoveContainer" containerID="88794d9ce369f495b7259b778fe2f30e0d3abf4ba9e552456a92786cf50e2601" Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.348779 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65df876645-cqjbm"] Jan 26 19:29:22 crc kubenswrapper[4787]: I0126 19:29:22.357286 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65df876645-cqjbm"] Jan 26 19:29:23 crc kubenswrapper[4787]: I0126 19:29:23.299700 4787 generic.go:334] "Generic (PLEG): container finished" podID="58461a7b-bde1-45cf-81e3-0dae1ce65e7c" containerID="2fb2e27fc54448f68edf4da50bf8149028a4e7a164b7c0446b4116aff6fac271" exitCode=0 Jan 26 19:29:23 crc kubenswrapper[4787]: I0126 19:29:23.299779 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b94478455-fqg6h" event={"ID":"58461a7b-bde1-45cf-81e3-0dae1ce65e7c","Type":"ContainerDied","Data":"2fb2e27fc54448f68edf4da50bf8149028a4e7a164b7c0446b4116aff6fac271"} Jan 26 19:29:23 crc kubenswrapper[4787]: I0126 19:29:23.605114 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53889309-1901-433a-8ba7-daa0657ca258" path="/var/lib/kubelet/pods/53889309-1901-433a-8ba7-daa0657ca258/volumes" Jan 26 19:29:24 crc kubenswrapper[4787]: I0126 19:29:24.313038 4787 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-5b94478455-fqg6h" event={"ID":"58461a7b-bde1-45cf-81e3-0dae1ce65e7c","Type":"ContainerStarted","Data":"1d17fc00c4ce1e8e9455d898c2de52f6c38c756a51237de5dcb30fd126dfb4ed"} Jan 26 19:29:24 crc kubenswrapper[4787]: I0126 19:29:24.313471 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:24 crc kubenswrapper[4787]: I0126 19:29:24.337856 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b94478455-fqg6h" podStartSLOduration=3.33783559 podStartE2EDuration="3.33783559s" podCreationTimestamp="2026-01-26 19:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:29:24.327888216 +0000 UTC m=+6333.035024379" watchObservedRunningTime="2026-01-26 19:29:24.33783559 +0000 UTC m=+6333.044971723" Jan 26 19:29:31 crc kubenswrapper[4787]: I0126 19:29:31.550210 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b94478455-fqg6h" Jan 26 19:29:31 crc kubenswrapper[4787]: I0126 19:29:31.630890 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67f665c9bf-hj65b"] Jan 26 19:29:31 crc kubenswrapper[4787]: I0126 19:29:31.631365 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" podUID="4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" containerName="dnsmasq-dns" containerID="cri-o://78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0" gracePeriod=10 Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.205236 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.287338 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-sb\") pod \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.287544 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbzzr\" (UniqueName: \"kubernetes.io/projected/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-kube-api-access-dbzzr\") pod \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.287600 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-config\") pod \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.287714 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-nb\") pod \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.287844 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-dns-svc\") pod \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.287916 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" 
(UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-openstack-cell1\") pod \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\" (UID: \"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253\") " Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.298217 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-kube-api-access-dbzzr" (OuterVolumeSpecName: "kube-api-access-dbzzr") pod "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" (UID: "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253"). InnerVolumeSpecName "kube-api-access-dbzzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.345973 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" (UID: "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.351405 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" (UID: "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.360880 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" (UID: "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.363178 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" (UID: "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.366294 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-config" (OuterVolumeSpecName: "config") pod "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" (UID: "4b3e6c3b-2cb3-4194-82f7-6a9053e4b253"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.390838 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbzzr\" (UniqueName: \"kubernetes.io/projected/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-kube-api-access-dbzzr\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.390871 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.390882 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.390890 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:32 crc 
kubenswrapper[4787]: I0126 19:29:32.390898 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.390905 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.410017 4787 generic.go:334] "Generic (PLEG): container finished" podID="4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" containerID="78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0" exitCode=0 Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.410060 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.410067 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" event={"ID":"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253","Type":"ContainerDied","Data":"78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0"} Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.410095 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67f665c9bf-hj65b" event={"ID":"4b3e6c3b-2cb3-4194-82f7-6a9053e4b253","Type":"ContainerDied","Data":"9d430e5d0b90a60fd652b2bb70a1f9879ebb7c5308a9c8b6465feaf4d3ec2239"} Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.410115 4787 scope.go:117] "RemoveContainer" containerID="78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.451252 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67f665c9bf-hj65b"] Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.451872 4787 
scope.go:117] "RemoveContainer" containerID="e9d00026c761173cbe7a9a9a96980b4f7705d9f31dc570a8b2ebf20993e8db77" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.459977 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67f665c9bf-hj65b"] Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.471315 4787 scope.go:117] "RemoveContainer" containerID="78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0" Jan 26 19:29:32 crc kubenswrapper[4787]: E0126 19:29:32.471654 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0\": container with ID starting with 78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0 not found: ID does not exist" containerID="78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.471689 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0"} err="failed to get container status \"78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0\": rpc error: code = NotFound desc = could not find container \"78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0\": container with ID starting with 78efcdba282cb612d2c2f3ea7bbe223e24c09147ace0793e29eb9bb7a89c07e0 not found: ID does not exist" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.471713 4787 scope.go:117] "RemoveContainer" containerID="e9d00026c761173cbe7a9a9a96980b4f7705d9f31dc570a8b2ebf20993e8db77" Jan 26 19:29:32 crc kubenswrapper[4787]: E0126 19:29:32.471997 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9d00026c761173cbe7a9a9a96980b4f7705d9f31dc570a8b2ebf20993e8db77\": container with ID starting with 
e9d00026c761173cbe7a9a9a96980b4f7705d9f31dc570a8b2ebf20993e8db77 not found: ID does not exist" containerID="e9d00026c761173cbe7a9a9a96980b4f7705d9f31dc570a8b2ebf20993e8db77" Jan 26 19:29:32 crc kubenswrapper[4787]: I0126 19:29:32.472029 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9d00026c761173cbe7a9a9a96980b4f7705d9f31dc570a8b2ebf20993e8db77"} err="failed to get container status \"e9d00026c761173cbe7a9a9a96980b4f7705d9f31dc570a8b2ebf20993e8db77\": rpc error: code = NotFound desc = could not find container \"e9d00026c761173cbe7a9a9a96980b4f7705d9f31dc570a8b2ebf20993e8db77\": container with ID starting with e9d00026c761173cbe7a9a9a96980b4f7705d9f31dc570a8b2ebf20993e8db77 not found: ID does not exist" Jan 26 19:29:33 crc kubenswrapper[4787]: I0126 19:29:33.600873 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" path="/var/lib/kubelet/pods/4b3e6c3b-2cb3-4194-82f7-6a9053e4b253/volumes" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.173809 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64"] Jan 26 19:29:42 crc kubenswrapper[4787]: E0126 19:29:42.175024 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53889309-1901-433a-8ba7-daa0657ca258" containerName="dnsmasq-dns" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.175042 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="53889309-1901-433a-8ba7-daa0657ca258" containerName="dnsmasq-dns" Jan 26 19:29:42 crc kubenswrapper[4787]: E0126 19:29:42.175062 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" containerName="init" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.175070 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" containerName="init" Jan 26 19:29:42 crc 
kubenswrapper[4787]: E0126 19:29:42.175090 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" containerName="dnsmasq-dns" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.175098 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" containerName="dnsmasq-dns" Jan 26 19:29:42 crc kubenswrapper[4787]: E0126 19:29:42.175116 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53889309-1901-433a-8ba7-daa0657ca258" containerName="init" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.175125 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="53889309-1901-433a-8ba7-daa0657ca258" containerName="init" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.175404 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="53889309-1901-433a-8ba7-daa0657ca258" containerName="dnsmasq-dns" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.175447 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b3e6c3b-2cb3-4194-82f7-6a9053e4b253" containerName="dnsmasq-dns" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.176368 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.180103 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.180451 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.180732 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.183319 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.191171 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64"] Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.316259 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.316310 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4kh\" (UniqueName: \"kubernetes.io/projected/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-kube-api-access-tf4kh\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.316525 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.316558 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.316663 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.418573 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.418673 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf4kh\" (UniqueName: \"kubernetes.io/projected/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-kube-api-access-tf4kh\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.418835 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.418876 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.418982 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.424686 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.425000 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.427992 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.432565 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.437107 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf4kh\" (UniqueName: 
\"kubernetes.io/projected/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-kube-api-access-tf4kh\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-crmx64\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:42 crc kubenswrapper[4787]: I0126 19:29:42.513246 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:29:43 crc kubenswrapper[4787]: I0126 19:29:43.071518 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64"] Jan 26 19:29:43 crc kubenswrapper[4787]: W0126 19:29:43.077320 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod716536ef_6d7f_4cc7_8e5b_5cc361d89e85.slice/crio-a16b7606ce5dad2cfe8b8e4e1d471e6293446b8deea33a4ba8ef0c9b7d17dc74 WatchSource:0}: Error finding container a16b7606ce5dad2cfe8b8e4e1d471e6293446b8deea33a4ba8ef0c9b7d17dc74: Status 404 returned error can't find the container with id a16b7606ce5dad2cfe8b8e4e1d471e6293446b8deea33a4ba8ef0c9b7d17dc74 Jan 26 19:29:43 crc kubenswrapper[4787]: I0126 19:29:43.080640 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:29:43 crc kubenswrapper[4787]: I0126 19:29:43.521881 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" event={"ID":"716536ef-6d7f-4cc7-8e5b-5cc361d89e85","Type":"ContainerStarted","Data":"a16b7606ce5dad2cfe8b8e4e1d471e6293446b8deea33a4ba8ef0c9b7d17dc74"} Jan 26 19:29:54 crc kubenswrapper[4787]: I0126 19:29:54.330562 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:29:54 crc kubenswrapper[4787]: I0126 19:29:54.667536 
4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" event={"ID":"716536ef-6d7f-4cc7-8e5b-5cc361d89e85","Type":"ContainerStarted","Data":"2de3707624504ec54b5d636deb87285c611cc7b42a463b4e3aea55be64dc1c1e"} Jan 26 19:29:54 crc kubenswrapper[4787]: I0126 19:29:54.689565 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" podStartSLOduration=1.442168744 podStartE2EDuration="12.689546846s" podCreationTimestamp="2026-01-26 19:29:42 +0000 UTC" firstStartedPulling="2026-01-26 19:29:43.080447626 +0000 UTC m=+6351.787583759" lastFinishedPulling="2026-01-26 19:29:54.327825718 +0000 UTC m=+6363.034961861" observedRunningTime="2026-01-26 19:29:54.683803835 +0000 UTC m=+6363.390939978" watchObservedRunningTime="2026-01-26 19:29:54.689546846 +0000 UTC m=+6363.396682979" Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.451752 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pzglb"] Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.462329 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.471464 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzglb"] Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.478231 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-catalog-content\") pod \"certified-operators-pzglb\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.478349 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-utilities\") pod \"certified-operators-pzglb\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.478521 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmt5t\" (UniqueName: \"kubernetes.io/projected/bf27858f-252c-4819-a189-e39eba30f1cc-kube-api-access-qmt5t\") pod \"certified-operators-pzglb\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.581313 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmt5t\" (UniqueName: \"kubernetes.io/projected/bf27858f-252c-4819-a189-e39eba30f1cc-kube-api-access-qmt5t\") pod \"certified-operators-pzglb\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.581594 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-catalog-content\") pod \"certified-operators-pzglb\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.581636 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-utilities\") pod \"certified-operators-pzglb\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.582423 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-utilities\") pod \"certified-operators-pzglb\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.582443 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-catalog-content\") pod \"certified-operators-pzglb\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.615607 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmt5t\" (UniqueName: \"kubernetes.io/projected/bf27858f-252c-4819-a189-e39eba30f1cc-kube-api-access-qmt5t\") pod \"certified-operators-pzglb\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:29:57 crc kubenswrapper[4787]: I0126 19:29:57.815711 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:29:58 crc kubenswrapper[4787]: I0126 19:29:58.305568 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzglb"] Jan 26 19:29:58 crc kubenswrapper[4787]: I0126 19:29:58.702180 4787 generic.go:334] "Generic (PLEG): container finished" podID="bf27858f-252c-4819-a189-e39eba30f1cc" containerID="f2f64831f8f53ae5046f10e68febe02e4ca3cf3f26dd9e835d121b4f213deb83" exitCode=0 Jan 26 19:29:58 crc kubenswrapper[4787]: I0126 19:29:58.702238 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzglb" event={"ID":"bf27858f-252c-4819-a189-e39eba30f1cc","Type":"ContainerDied","Data":"f2f64831f8f53ae5046f10e68febe02e4ca3cf3f26dd9e835d121b4f213deb83"} Jan 26 19:29:58 crc kubenswrapper[4787]: I0126 19:29:58.702494 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzglb" event={"ID":"bf27858f-252c-4819-a189-e39eba30f1cc","Type":"ContainerStarted","Data":"7cfba19dc8fd8e7386345b11df963a61e76adaeef0922363bfca9c2e0404c828"} Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.148387 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q"] Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.150989 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.154700 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.154724 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.166779 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q"] Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.233888 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd41c38b-22dc-47b3-8861-c463b95f4201-config-volume\") pod \"collect-profiles-29490930-bhr8q\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.233970 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd41c38b-22dc-47b3-8861-c463b95f4201-secret-volume\") pod \"collect-profiles-29490930-bhr8q\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.234097 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n822\" (UniqueName: \"kubernetes.io/projected/cd41c38b-22dc-47b3-8861-c463b95f4201-kube-api-access-5n822\") pod \"collect-profiles-29490930-bhr8q\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.336533 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n822\" (UniqueName: \"kubernetes.io/projected/cd41c38b-22dc-47b3-8861-c463b95f4201-kube-api-access-5n822\") pod \"collect-profiles-29490930-bhr8q\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.336733 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd41c38b-22dc-47b3-8861-c463b95f4201-config-volume\") pod \"collect-profiles-29490930-bhr8q\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.336803 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd41c38b-22dc-47b3-8861-c463b95f4201-secret-volume\") pod \"collect-profiles-29490930-bhr8q\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.337923 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd41c38b-22dc-47b3-8861-c463b95f4201-config-volume\") pod \"collect-profiles-29490930-bhr8q\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.354153 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/cd41c38b-22dc-47b3-8861-c463b95f4201-secret-volume\") pod \"collect-profiles-29490930-bhr8q\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.367696 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n822\" (UniqueName: \"kubernetes.io/projected/cd41c38b-22dc-47b3-8861-c463b95f4201-kube-api-access-5n822\") pod \"collect-profiles-29490930-bhr8q\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.482636 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:00 crc kubenswrapper[4787]: I0126 19:30:00.780392 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzglb" event={"ID":"bf27858f-252c-4819-a189-e39eba30f1cc","Type":"ContainerStarted","Data":"03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11"} Jan 26 19:30:01 crc kubenswrapper[4787]: I0126 19:30:01.328356 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q"] Jan 26 19:30:01 crc kubenswrapper[4787]: I0126 19:30:01.793491 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" event={"ID":"cd41c38b-22dc-47b3-8861-c463b95f4201","Type":"ContainerStarted","Data":"221b234b47d81ee51e4984f84926ffc8eb31aab174ef735ad37d5f822c643636"} Jan 26 19:30:01 crc kubenswrapper[4787]: I0126 19:30:01.970935 4787 scope.go:117] "RemoveContainer" containerID="3132cbb1395416c0894b5e772e4c43286622faf505f5fc9254937c3e38300c6b" Jan 26 19:30:02 crc kubenswrapper[4787]: I0126 19:30:02.155009 4787 
scope.go:117] "RemoveContainer" containerID="0f9ab83d2349a71967960d1673e6245a78af58a932df3ba9ae5b78f7c4957451" Jan 26 19:30:02 crc kubenswrapper[4787]: I0126 19:30:02.813283 4787 generic.go:334] "Generic (PLEG): container finished" podID="bf27858f-252c-4819-a189-e39eba30f1cc" containerID="03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11" exitCode=0 Jan 26 19:30:02 crc kubenswrapper[4787]: I0126 19:30:02.813362 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzglb" event={"ID":"bf27858f-252c-4819-a189-e39eba30f1cc","Type":"ContainerDied","Data":"03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11"} Jan 26 19:30:02 crc kubenswrapper[4787]: I0126 19:30:02.816144 4787 generic.go:334] "Generic (PLEG): container finished" podID="cd41c38b-22dc-47b3-8861-c463b95f4201" containerID="3c0a5d7401929cd77daf2f526941ede61c8d9e908d7a666c5e7de75b2e980238" exitCode=0 Jan 26 19:30:02 crc kubenswrapper[4787]: I0126 19:30:02.816194 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" event={"ID":"cd41c38b-22dc-47b3-8861-c463b95f4201","Type":"ContainerDied","Data":"3c0a5d7401929cd77daf2f526941ede61c8d9e908d7a666c5e7de75b2e980238"} Jan 26 19:30:03 crc kubenswrapper[4787]: I0126 19:30:03.829211 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzglb" event={"ID":"bf27858f-252c-4819-a189-e39eba30f1cc","Type":"ContainerStarted","Data":"d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2"} Jan 26 19:30:03 crc kubenswrapper[4787]: I0126 19:30:03.859521 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pzglb" podStartSLOduration=2.273896786 podStartE2EDuration="6.859500207s" podCreationTimestamp="2026-01-26 19:29:57 +0000 UTC" firstStartedPulling="2026-01-26 19:29:58.704710573 +0000 UTC m=+6367.411846706" 
lastFinishedPulling="2026-01-26 19:30:03.290313994 +0000 UTC m=+6371.997450127" observedRunningTime="2026-01-26 19:30:03.847128865 +0000 UTC m=+6372.554265008" watchObservedRunningTime="2026-01-26 19:30:03.859500207 +0000 UTC m=+6372.566636340" Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.270551 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.461164 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n822\" (UniqueName: \"kubernetes.io/projected/cd41c38b-22dc-47b3-8861-c463b95f4201-kube-api-access-5n822\") pod \"cd41c38b-22dc-47b3-8861-c463b95f4201\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.461232 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd41c38b-22dc-47b3-8861-c463b95f4201-secret-volume\") pod \"cd41c38b-22dc-47b3-8861-c463b95f4201\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.461271 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd41c38b-22dc-47b3-8861-c463b95f4201-config-volume\") pod \"cd41c38b-22dc-47b3-8861-c463b95f4201\" (UID: \"cd41c38b-22dc-47b3-8861-c463b95f4201\") " Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.462230 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd41c38b-22dc-47b3-8861-c463b95f4201-config-volume" (OuterVolumeSpecName: "config-volume") pod "cd41c38b-22dc-47b3-8861-c463b95f4201" (UID: "cd41c38b-22dc-47b3-8861-c463b95f4201"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.466996 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd41c38b-22dc-47b3-8861-c463b95f4201-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cd41c38b-22dc-47b3-8861-c463b95f4201" (UID: "cd41c38b-22dc-47b3-8861-c463b95f4201"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.467585 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd41c38b-22dc-47b3-8861-c463b95f4201-kube-api-access-5n822" (OuterVolumeSpecName: "kube-api-access-5n822") pod "cd41c38b-22dc-47b3-8861-c463b95f4201" (UID: "cd41c38b-22dc-47b3-8861-c463b95f4201"). InnerVolumeSpecName "kube-api-access-5n822". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.564357 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n822\" (UniqueName: \"kubernetes.io/projected/cd41c38b-22dc-47b3-8861-c463b95f4201-kube-api-access-5n822\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.564399 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd41c38b-22dc-47b3-8861-c463b95f4201-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.564408 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd41c38b-22dc-47b3-8861-c463b95f4201-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.840756 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" 
event={"ID":"cd41c38b-22dc-47b3-8861-c463b95f4201","Type":"ContainerDied","Data":"221b234b47d81ee51e4984f84926ffc8eb31aab174ef735ad37d5f822c643636"} Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.840799 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="221b234b47d81ee51e4984f84926ffc8eb31aab174ef735ad37d5f822c643636" Jan 26 19:30:04 crc kubenswrapper[4787]: I0126 19:30:04.840811 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q" Jan 26 19:30:05 crc kubenswrapper[4787]: I0126 19:30:05.354649 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj"] Jan 26 19:30:05 crc kubenswrapper[4787]: I0126 19:30:05.363301 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-4j7kj"] Jan 26 19:30:05 crc kubenswrapper[4787]: I0126 19:30:05.603992 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f717fe1-1edc-4072-8271-4116ac22a6df" path="/var/lib/kubelet/pods/0f717fe1-1edc-4072-8271-4116ac22a6df/volumes" Jan 26 19:30:07 crc kubenswrapper[4787]: I0126 19:30:07.816080 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:30:07 crc kubenswrapper[4787]: I0126 19:30:07.816796 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:30:07 crc kubenswrapper[4787]: I0126 19:30:07.865686 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:30:07 crc kubenswrapper[4787]: I0126 19:30:07.872121 4787 generic.go:334] "Generic (PLEG): container finished" podID="716536ef-6d7f-4cc7-8e5b-5cc361d89e85" 
containerID="2de3707624504ec54b5d636deb87285c611cc7b42a463b4e3aea55be64dc1c1e" exitCode=0 Jan 26 19:30:07 crc kubenswrapper[4787]: I0126 19:30:07.873199 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" event={"ID":"716536ef-6d7f-4cc7-8e5b-5cc361d89e85","Type":"ContainerDied","Data":"2de3707624504ec54b5d636deb87285c611cc7b42a463b4e3aea55be64dc1c1e"} Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.401235 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.578669 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-inventory\") pod \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.578729 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ssh-key-openstack-cell1\") pod \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.578861 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-pre-adoption-validation-combined-ca-bundle\") pod \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.578911 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf4kh\" (UniqueName: 
\"kubernetes.io/projected/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-kube-api-access-tf4kh\") pod \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.579083 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ceph\") pod \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\" (UID: \"716536ef-6d7f-4cc7-8e5b-5cc361d89e85\") " Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.583831 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ceph" (OuterVolumeSpecName: "ceph") pod "716536ef-6d7f-4cc7-8e5b-5cc361d89e85" (UID: "716536ef-6d7f-4cc7-8e5b-5cc361d89e85"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.584188 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "716536ef-6d7f-4cc7-8e5b-5cc361d89e85" (UID: "716536ef-6d7f-4cc7-8e5b-5cc361d89e85"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.594182 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-kube-api-access-tf4kh" (OuterVolumeSpecName: "kube-api-access-tf4kh") pod "716536ef-6d7f-4cc7-8e5b-5cc361d89e85" (UID: "716536ef-6d7f-4cc7-8e5b-5cc361d89e85"). InnerVolumeSpecName "kube-api-access-tf4kh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.608729 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-inventory" (OuterVolumeSpecName: "inventory") pod "716536ef-6d7f-4cc7-8e5b-5cc361d89e85" (UID: "716536ef-6d7f-4cc7-8e5b-5cc361d89e85"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.620601 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "716536ef-6d7f-4cc7-8e5b-5cc361d89e85" (UID: "716536ef-6d7f-4cc7-8e5b-5cc361d89e85"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.683520 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.683797 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.683814 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.683829 4787 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.683842 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf4kh\" (UniqueName: \"kubernetes.io/projected/716536ef-6d7f-4cc7-8e5b-5cc361d89e85-kube-api-access-tf4kh\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.892765 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" event={"ID":"716536ef-6d7f-4cc7-8e5b-5cc361d89e85","Type":"ContainerDied","Data":"a16b7606ce5dad2cfe8b8e4e1d471e6293446b8deea33a4ba8ef0c9b7d17dc74"} Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.892804 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16b7606ce5dad2cfe8b8e4e1d471e6293446b8deea33a4ba8ef0c9b7d17dc74" Jan 26 19:30:09 crc kubenswrapper[4787]: I0126 19:30:09.892866 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-crmx64" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.855423 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22"] Jan 26 19:30:14 crc kubenswrapper[4787]: E0126 19:30:14.856513 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd41c38b-22dc-47b3-8861-c463b95f4201" containerName="collect-profiles" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.856528 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd41c38b-22dc-47b3-8861-c463b95f4201" containerName="collect-profiles" Jan 26 19:30:14 crc kubenswrapper[4787]: E0126 19:30:14.856556 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716536ef-6d7f-4cc7-8e5b-5cc361d89e85" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.856566 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="716536ef-6d7f-4cc7-8e5b-5cc361d89e85" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.856830 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="716536ef-6d7f-4cc7-8e5b-5cc361d89e85" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.856852 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd41c38b-22dc-47b3-8861-c463b95f4201" containerName="collect-profiles" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.857816 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.862036 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.862258 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.862709 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.876881 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22"] Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.881654 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.990731 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.990795 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.990821 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.990861 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:14 crc kubenswrapper[4787]: I0126 19:30:14.990914 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnpp7\" (UniqueName: \"kubernetes.io/projected/c86490c4-11c2-4a38-9de0-cdb076526ea1-kube-api-access-dnpp7\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.094491 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.094827 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.094867 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.094913 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.094998 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnpp7\" (UniqueName: \"kubernetes.io/projected/c86490c4-11c2-4a38-9de0-cdb076526ea1-kube-api-access-dnpp7\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.101622 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 
19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.101750 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.102005 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.103005 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.116727 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnpp7\" (UniqueName: \"kubernetes.io/projected/c86490c4-11c2-4a38-9de0-cdb076526ea1-kube-api-access-dnpp7\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.182769 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.787143 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22"] Jan 26 19:30:15 crc kubenswrapper[4787]: I0126 19:30:15.952085 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" event={"ID":"c86490c4-11c2-4a38-9de0-cdb076526ea1","Type":"ContainerStarted","Data":"139726cb94a854203734c894b36c402b59013c4286047a98e60ecd861f3f202e"} Jan 26 19:30:16 crc kubenswrapper[4787]: I0126 19:30:16.977960 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" event={"ID":"c86490c4-11c2-4a38-9de0-cdb076526ea1","Type":"ContainerStarted","Data":"dff20b46aeaa92464c3d360f92f313e281cc7713c55e362317aa78769207a948"} Jan 26 19:30:17 crc kubenswrapper[4787]: I0126 19:30:17.006674 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" podStartSLOduration=2.584320681 podStartE2EDuration="3.006650004s" podCreationTimestamp="2026-01-26 19:30:14 +0000 UTC" firstStartedPulling="2026-01-26 19:30:15.7888864 +0000 UTC m=+6384.496022533" lastFinishedPulling="2026-01-26 19:30:16.211215723 +0000 UTC m=+6384.918351856" observedRunningTime="2026-01-26 19:30:16.996232079 +0000 UTC m=+6385.703368212" watchObservedRunningTime="2026-01-26 19:30:17.006650004 +0000 UTC m=+6385.713786137" Jan 26 19:30:17 crc kubenswrapper[4787]: I0126 19:30:17.867928 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:30:17 crc kubenswrapper[4787]: I0126 19:30:17.917719 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzglb"] Jan 26 19:30:17 crc 
kubenswrapper[4787]: I0126 19:30:17.986127 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pzglb" podUID="bf27858f-252c-4819-a189-e39eba30f1cc" containerName="registry-server" containerID="cri-o://d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2" gracePeriod=2 Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.546871 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.706608 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-catalog-content\") pod \"bf27858f-252c-4819-a189-e39eba30f1cc\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.706790 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-utilities\") pod \"bf27858f-252c-4819-a189-e39eba30f1cc\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.706846 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmt5t\" (UniqueName: \"kubernetes.io/projected/bf27858f-252c-4819-a189-e39eba30f1cc-kube-api-access-qmt5t\") pod \"bf27858f-252c-4819-a189-e39eba30f1cc\" (UID: \"bf27858f-252c-4819-a189-e39eba30f1cc\") " Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.709284 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-utilities" (OuterVolumeSpecName: "utilities") pod "bf27858f-252c-4819-a189-e39eba30f1cc" (UID: "bf27858f-252c-4819-a189-e39eba30f1cc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.715868 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf27858f-252c-4819-a189-e39eba30f1cc-kube-api-access-qmt5t" (OuterVolumeSpecName: "kube-api-access-qmt5t") pod "bf27858f-252c-4819-a189-e39eba30f1cc" (UID: "bf27858f-252c-4819-a189-e39eba30f1cc"). InnerVolumeSpecName "kube-api-access-qmt5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.759833 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf27858f-252c-4819-a189-e39eba30f1cc" (UID: "bf27858f-252c-4819-a189-e39eba30f1cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.809113 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.809143 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmt5t\" (UniqueName: \"kubernetes.io/projected/bf27858f-252c-4819-a189-e39eba30f1cc-kube-api-access-qmt5t\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.809154 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf27858f-252c-4819-a189-e39eba30f1cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.996261 4787 generic.go:334] "Generic (PLEG): container finished" podID="bf27858f-252c-4819-a189-e39eba30f1cc" 
containerID="d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2" exitCode=0 Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.996348 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzglb" Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.996391 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzglb" event={"ID":"bf27858f-252c-4819-a189-e39eba30f1cc","Type":"ContainerDied","Data":"d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2"} Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.997308 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzglb" event={"ID":"bf27858f-252c-4819-a189-e39eba30f1cc","Type":"ContainerDied","Data":"7cfba19dc8fd8e7386345b11df963a61e76adaeef0922363bfca9c2e0404c828"} Jan 26 19:30:18 crc kubenswrapper[4787]: I0126 19:30:18.997342 4787 scope.go:117] "RemoveContainer" containerID="d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2" Jan 26 19:30:19 crc kubenswrapper[4787]: I0126 19:30:19.024162 4787 scope.go:117] "RemoveContainer" containerID="03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11" Jan 26 19:30:19 crc kubenswrapper[4787]: I0126 19:30:19.034478 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzglb"] Jan 26 19:30:19 crc kubenswrapper[4787]: I0126 19:30:19.044367 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pzglb"] Jan 26 19:30:19 crc kubenswrapper[4787]: I0126 19:30:19.062106 4787 scope.go:117] "RemoveContainer" containerID="f2f64831f8f53ae5046f10e68febe02e4ca3cf3f26dd9e835d121b4f213deb83" Jan 26 19:30:19 crc kubenswrapper[4787]: I0126 19:30:19.092553 4787 scope.go:117] "RemoveContainer" containerID="d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2" Jan 26 
19:30:19 crc kubenswrapper[4787]: E0126 19:30:19.093520 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2\": container with ID starting with d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2 not found: ID does not exist" containerID="d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2" Jan 26 19:30:19 crc kubenswrapper[4787]: I0126 19:30:19.093554 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2"} err="failed to get container status \"d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2\": rpc error: code = NotFound desc = could not find container \"d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2\": container with ID starting with d2b627a59acb2d66053ce939f9c8cd8c5253ad12e3b29c71c28aa4ee4181d7e2 not found: ID does not exist" Jan 26 19:30:19 crc kubenswrapper[4787]: I0126 19:30:19.093576 4787 scope.go:117] "RemoveContainer" containerID="03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11" Jan 26 19:30:19 crc kubenswrapper[4787]: E0126 19:30:19.093879 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11\": container with ID starting with 03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11 not found: ID does not exist" containerID="03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11" Jan 26 19:30:19 crc kubenswrapper[4787]: I0126 19:30:19.093902 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11"} err="failed to get container status 
\"03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11\": rpc error: code = NotFound desc = could not find container \"03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11\": container with ID starting with 03b6136ace4420fcd63c6ccf2b4497b943c253e455b21677a40c058224674c11 not found: ID does not exist" Jan 26 19:30:19 crc kubenswrapper[4787]: I0126 19:30:19.093915 4787 scope.go:117] "RemoveContainer" containerID="f2f64831f8f53ae5046f10e68febe02e4ca3cf3f26dd9e835d121b4f213deb83" Jan 26 19:30:19 crc kubenswrapper[4787]: E0126 19:30:19.094214 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2f64831f8f53ae5046f10e68febe02e4ca3cf3f26dd9e835d121b4f213deb83\": container with ID starting with f2f64831f8f53ae5046f10e68febe02e4ca3cf3f26dd9e835d121b4f213deb83 not found: ID does not exist" containerID="f2f64831f8f53ae5046f10e68febe02e4ca3cf3f26dd9e835d121b4f213deb83" Jan 26 19:30:19 crc kubenswrapper[4787]: I0126 19:30:19.094233 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f64831f8f53ae5046f10e68febe02e4ca3cf3f26dd9e835d121b4f213deb83"} err="failed to get container status \"f2f64831f8f53ae5046f10e68febe02e4ca3cf3f26dd9e835d121b4f213deb83\": rpc error: code = NotFound desc = could not find container \"f2f64831f8f53ae5046f10e68febe02e4ca3cf3f26dd9e835d121b4f213deb83\": container with ID starting with f2f64831f8f53ae5046f10e68febe02e4ca3cf3f26dd9e835d121b4f213deb83 not found: ID does not exist" Jan 26 19:30:19 crc kubenswrapper[4787]: I0126 19:30:19.600701 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf27858f-252c-4819-a189-e39eba30f1cc" path="/var/lib/kubelet/pods/bf27858f-252c-4819-a189-e39eba30f1cc/volumes" Jan 26 19:30:37 crc kubenswrapper[4787]: I0126 19:30:37.046426 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-d6t84"] Jan 26 19:30:37 crc 
kubenswrapper[4787]: I0126 19:30:37.058986 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-d6t84"] Jan 26 19:30:37 crc kubenswrapper[4787]: I0126 19:30:37.602373 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="889b0a68-86e3-4211-bbb8-40903d0bd246" path="/var/lib/kubelet/pods/889b0a68-86e3-4211-bbb8-40903d0bd246/volumes" Jan 26 19:30:39 crc kubenswrapper[4787]: I0126 19:30:39.028292 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-7430-account-create-update-6xfvh"] Jan 26 19:30:39 crc kubenswrapper[4787]: I0126 19:30:39.042870 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-7430-account-create-update-6xfvh"] Jan 26 19:30:39 crc kubenswrapper[4787]: I0126 19:30:39.602355 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63118e30-84e6-489f-aaeb-bfec57775e4b" path="/var/lib/kubelet/pods/63118e30-84e6-489f-aaeb-bfec57775e4b/volumes" Jan 26 19:30:45 crc kubenswrapper[4787]: I0126 19:30:45.037023 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-87b7d"] Jan 26 19:30:45 crc kubenswrapper[4787]: I0126 19:30:45.045525 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-87b7d"] Jan 26 19:30:45 crc kubenswrapper[4787]: I0126 19:30:45.604127 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7cb052f-a346-4b9e-82bd-4cb6bcef7563" path="/var/lib/kubelet/pods/b7cb052f-a346-4b9e-82bd-4cb6bcef7563/volumes" Jan 26 19:30:46 crc kubenswrapper[4787]: I0126 19:30:46.035932 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-0e5d-account-create-update-df54g"] Jan 26 19:30:46 crc kubenswrapper[4787]: I0126 19:30:46.051670 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-0e5d-account-create-update-df54g"] Jan 26 19:30:47 crc kubenswrapper[4787]: I0126 19:30:47.601611 
4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebee04d5-772e-4f11-9ca5-a24268283f6d" path="/var/lib/kubelet/pods/ebee04d5-772e-4f11-9ca5-a24268283f6d/volumes" Jan 26 19:31:02 crc kubenswrapper[4787]: I0126 19:31:02.414290 4787 scope.go:117] "RemoveContainer" containerID="df048c0b0c75be9019fc3a7efebbd4a9c72355b8caf02beac89570786b591688" Jan 26 19:31:02 crc kubenswrapper[4787]: I0126 19:31:02.443920 4787 scope.go:117] "RemoveContainer" containerID="3085b2e02b754b94e8742f65729e6947d0a483e51b3c26166bb28c84e18f5c9c" Jan 26 19:31:02 crc kubenswrapper[4787]: I0126 19:31:02.494896 4787 scope.go:117] "RemoveContainer" containerID="50c1a40a4f70cd4ab6e5188ae60bcdef27b508eeb72d96a34b4b181600ceef08" Jan 26 19:31:02 crc kubenswrapper[4787]: I0126 19:31:02.553185 4787 scope.go:117] "RemoveContainer" containerID="de3c40be93447f2219c999d3828317ed1025bd1cbf3866de85c16e8f23be9bce" Jan 26 19:31:02 crc kubenswrapper[4787]: I0126 19:31:02.605044 4787 scope.go:117] "RemoveContainer" containerID="b8218d050cde4348df551cdb4999fce44cd075df785e6d95ae0fa0e0a459ed3a" Jan 26 19:31:16 crc kubenswrapper[4787]: I0126 19:31:16.808179 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:31:16 crc kubenswrapper[4787]: I0126 19:31:16.808748 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:31:40 crc kubenswrapper[4787]: I0126 19:31:40.043844 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-s566r"] Jan 26 
19:31:40 crc kubenswrapper[4787]: I0126 19:31:40.052759 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-s566r"] Jan 26 19:31:41 crc kubenswrapper[4787]: I0126 19:31:41.604882 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e153ee6-c310-414d-973f-13e5cd1b936c" path="/var/lib/kubelet/pods/3e153ee6-c310-414d-973f-13e5cd1b936c/volumes" Jan 26 19:31:46 crc kubenswrapper[4787]: I0126 19:31:46.808145 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:31:46 crc kubenswrapper[4787]: I0126 19:31:46.809060 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:32:02 crc kubenswrapper[4787]: I0126 19:32:02.778053 4787 scope.go:117] "RemoveContainer" containerID="36bd33021285315efbada5121525354e78ae4627f10b0413943aed5bf95820da" Jan 26 19:32:02 crc kubenswrapper[4787]: I0126 19:32:02.822231 4787 scope.go:117] "RemoveContainer" containerID="eb4008169cc31a07489b4f1ec9bd5ff5d08cb7cb48614378d87d11d4af287ba3" Jan 26 19:32:16 crc kubenswrapper[4787]: I0126 19:32:16.808232 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:32:16 crc kubenswrapper[4787]: I0126 19:32:16.808753 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:32:16 crc kubenswrapper[4787]: I0126 19:32:16.808794 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 19:32:16 crc kubenswrapper[4787]: I0126 19:32:16.809573 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25ad5d5ab6da5a26ef32edfd02a6c7f57994a38dadf7f56c7c0ef086287b79ae"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:32:16 crc kubenswrapper[4787]: I0126 19:32:16.809624 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://25ad5d5ab6da5a26ef32edfd02a6c7f57994a38dadf7f56c7c0ef086287b79ae" gracePeriod=600 Jan 26 19:32:17 crc kubenswrapper[4787]: I0126 19:32:17.156088 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="25ad5d5ab6da5a26ef32edfd02a6c7f57994a38dadf7f56c7c0ef086287b79ae" exitCode=0 Jan 26 19:32:17 crc kubenswrapper[4787]: I0126 19:32:17.156299 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"25ad5d5ab6da5a26ef32edfd02a6c7f57994a38dadf7f56c7c0ef086287b79ae"} Jan 26 19:32:17 crc kubenswrapper[4787]: I0126 19:32:17.156439 4787 scope.go:117] "RemoveContainer" 
containerID="db05500c613dd4a5ddba7f2ce858cd763a10587ce411d2da07595f7d5992f7c3" Jan 26 19:32:18 crc kubenswrapper[4787]: I0126 19:32:18.170569 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb"} Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.088377 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-shhs6"] Jan 26 19:32:54 crc kubenswrapper[4787]: E0126 19:32:54.090263 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf27858f-252c-4819-a189-e39eba30f1cc" containerName="registry-server" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.090288 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf27858f-252c-4819-a189-e39eba30f1cc" containerName="registry-server" Jan 26 19:32:54 crc kubenswrapper[4787]: E0126 19:32:54.090352 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf27858f-252c-4819-a189-e39eba30f1cc" containerName="extract-utilities" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.090366 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf27858f-252c-4819-a189-e39eba30f1cc" containerName="extract-utilities" Jan 26 19:32:54 crc kubenswrapper[4787]: E0126 19:32:54.090385 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf27858f-252c-4819-a189-e39eba30f1cc" containerName="extract-content" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.090397 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf27858f-252c-4819-a189-e39eba30f1cc" containerName="extract-content" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.090713 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf27858f-252c-4819-a189-e39eba30f1cc" containerName="registry-server" Jan 26 19:32:54 
crc kubenswrapper[4787]: I0126 19:32:54.093198 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shhs6" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.107053 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shhs6"] Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.129181 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-utilities\") pod \"redhat-operators-shhs6\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") " pod="openshift-marketplace/redhat-operators-shhs6" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.129277 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-catalog-content\") pod \"redhat-operators-shhs6\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") " pod="openshift-marketplace/redhat-operators-shhs6" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.129508 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjfv\" (UniqueName: \"kubernetes.io/projected/02ab9dce-0aac-49a3-934d-6f7f9b167018-kube-api-access-fhjfv\") pod \"redhat-operators-shhs6\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") " pod="openshift-marketplace/redhat-operators-shhs6" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.231924 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhjfv\" (UniqueName: \"kubernetes.io/projected/02ab9dce-0aac-49a3-934d-6f7f9b167018-kube-api-access-fhjfv\") pod \"redhat-operators-shhs6\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") " pod="openshift-marketplace/redhat-operators-shhs6" Jan 26 19:32:54 crc 
kubenswrapper[4787]: I0126 19:32:54.232090 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-utilities\") pod \"redhat-operators-shhs6\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") " pod="openshift-marketplace/redhat-operators-shhs6" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.232134 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-catalog-content\") pod \"redhat-operators-shhs6\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") " pod="openshift-marketplace/redhat-operators-shhs6" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.232544 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-utilities\") pod \"redhat-operators-shhs6\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") " pod="openshift-marketplace/redhat-operators-shhs6" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.232858 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-catalog-content\") pod \"redhat-operators-shhs6\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") " pod="openshift-marketplace/redhat-operators-shhs6" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.258911 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhjfv\" (UniqueName: \"kubernetes.io/projected/02ab9dce-0aac-49a3-934d-6f7f9b167018-kube-api-access-fhjfv\") pod \"redhat-operators-shhs6\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") " pod="openshift-marketplace/redhat-operators-shhs6" Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.417889 4787 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shhs6"
Jan 26 19:32:54 crc kubenswrapper[4787]: I0126 19:32:54.897356 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shhs6"]
Jan 26 19:32:55 crc kubenswrapper[4787]: I0126 19:32:55.556989 4787 generic.go:334] "Generic (PLEG): container finished" podID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerID="6c12f79463b8e8c58f3ebb40c1bc4c5f85fc73b35dc4fbfaa52bccdc8b2d7973" exitCode=0
Jan 26 19:32:55 crc kubenswrapper[4787]: I0126 19:32:55.557169 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shhs6" event={"ID":"02ab9dce-0aac-49a3-934d-6f7f9b167018","Type":"ContainerDied","Data":"6c12f79463b8e8c58f3ebb40c1bc4c5f85fc73b35dc4fbfaa52bccdc8b2d7973"}
Jan 26 19:32:55 crc kubenswrapper[4787]: I0126 19:32:55.557334 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shhs6" event={"ID":"02ab9dce-0aac-49a3-934d-6f7f9b167018","Type":"ContainerStarted","Data":"f5be25f8ff84dc174b6a61f7aba3676001822df0d0a045107a31e571d54c6182"}
Jan 26 19:32:56 crc kubenswrapper[4787]: I0126 19:32:56.573361 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shhs6" event={"ID":"02ab9dce-0aac-49a3-934d-6f7f9b167018","Type":"ContainerStarted","Data":"4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2"}
Jan 26 19:33:00 crc kubenswrapper[4787]: I0126 19:33:00.610452 4787 generic.go:334] "Generic (PLEG): container finished" podID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerID="4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2" exitCode=0
Jan 26 19:33:00 crc kubenswrapper[4787]: I0126 19:33:00.610537 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shhs6" event={"ID":"02ab9dce-0aac-49a3-934d-6f7f9b167018","Type":"ContainerDied","Data":"4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2"}
Jan 26 19:33:01 crc kubenswrapper[4787]: I0126 19:33:01.624190 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shhs6" event={"ID":"02ab9dce-0aac-49a3-934d-6f7f9b167018","Type":"ContainerStarted","Data":"33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e"}
Jan 26 19:33:01 crc kubenswrapper[4787]: I0126 19:33:01.650475 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-shhs6" podStartSLOduration=2.028499757 podStartE2EDuration="7.650453355s" podCreationTimestamp="2026-01-26 19:32:54 +0000 UTC" firstStartedPulling="2026-01-26 19:32:55.55914927 +0000 UTC m=+6544.266285403" lastFinishedPulling="2026-01-26 19:33:01.181102868 +0000 UTC m=+6549.888239001" observedRunningTime="2026-01-26 19:33:01.644482859 +0000 UTC m=+6550.351618992" watchObservedRunningTime="2026-01-26 19:33:01.650453355 +0000 UTC m=+6550.357589478"
Jan 26 19:33:02 crc kubenswrapper[4787]: I0126 19:33:02.908963 4787 scope.go:117] "RemoveContainer" containerID="5ad34f823079cc009fa38293deb891684ab5a58aae819d6877e75cab51a0bb89"
Jan 26 19:33:02 crc kubenswrapper[4787]: I0126 19:33:02.951033 4787 scope.go:117] "RemoveContainer" containerID="98ecdaa65df4414a3b25228999b43661412c45c4805375ab31b23d9a43c55706"
Jan 26 19:33:02 crc kubenswrapper[4787]: I0126 19:33:02.976765 4787 scope.go:117] "RemoveContainer" containerID="e36ead0e47484664d688bcc1425b351bf809ed0f49058dbd9361ca0e6efb1040"
Jan 26 19:33:04 crc kubenswrapper[4787]: I0126 19:33:04.418961 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-shhs6"
Jan 26 19:33:04 crc kubenswrapper[4787]: I0126 19:33:04.419020 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-shhs6"
Jan 26 19:33:05 crc kubenswrapper[4787]: I0126 19:33:05.471160 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-shhs6" podUID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerName="registry-server" probeResult="failure" output=<
Jan 26 19:33:05 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s
Jan 26 19:33:05 crc kubenswrapper[4787]: >
Jan 26 19:33:14 crc kubenswrapper[4787]: I0126 19:33:14.484338 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-shhs6"
Jan 26 19:33:14 crc kubenswrapper[4787]: I0126 19:33:14.543255 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-shhs6"
Jan 26 19:33:14 crc kubenswrapper[4787]: I0126 19:33:14.727238 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shhs6"]
Jan 26 19:33:15 crc kubenswrapper[4787]: I0126 19:33:15.766056 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-shhs6" podUID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerName="registry-server" containerID="cri-o://33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e" gracePeriod=2
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.319913 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shhs6"
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.477193 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhjfv\" (UniqueName: \"kubernetes.io/projected/02ab9dce-0aac-49a3-934d-6f7f9b167018-kube-api-access-fhjfv\") pod \"02ab9dce-0aac-49a3-934d-6f7f9b167018\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") "
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.477347 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-utilities\") pod \"02ab9dce-0aac-49a3-934d-6f7f9b167018\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") "
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.477407 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-catalog-content\") pod \"02ab9dce-0aac-49a3-934d-6f7f9b167018\" (UID: \"02ab9dce-0aac-49a3-934d-6f7f9b167018\") "
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.478403 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-utilities" (OuterVolumeSpecName: "utilities") pod "02ab9dce-0aac-49a3-934d-6f7f9b167018" (UID: "02ab9dce-0aac-49a3-934d-6f7f9b167018"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.483913 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ab9dce-0aac-49a3-934d-6f7f9b167018-kube-api-access-fhjfv" (OuterVolumeSpecName: "kube-api-access-fhjfv") pod "02ab9dce-0aac-49a3-934d-6f7f9b167018" (UID: "02ab9dce-0aac-49a3-934d-6f7f9b167018"). InnerVolumeSpecName "kube-api-access-fhjfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.580399 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhjfv\" (UniqueName: \"kubernetes.io/projected/02ab9dce-0aac-49a3-934d-6f7f9b167018-kube-api-access-fhjfv\") on node \"crc\" DevicePath \"\""
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.580684 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.594208 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02ab9dce-0aac-49a3-934d-6f7f9b167018" (UID: "02ab9dce-0aac-49a3-934d-6f7f9b167018"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.683841 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ab9dce-0aac-49a3-934d-6f7f9b167018-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.779728 4787 generic.go:334] "Generic (PLEG): container finished" podID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerID="33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e" exitCode=0
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.779774 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shhs6" event={"ID":"02ab9dce-0aac-49a3-934d-6f7f9b167018","Type":"ContainerDied","Data":"33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e"}
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.779790 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shhs6"
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.779816 4787 scope.go:117] "RemoveContainer" containerID="33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e"
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.779805 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shhs6" event={"ID":"02ab9dce-0aac-49a3-934d-6f7f9b167018","Type":"ContainerDied","Data":"f5be25f8ff84dc174b6a61f7aba3676001822df0d0a045107a31e571d54c6182"}
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.808174 4787 scope.go:117] "RemoveContainer" containerID="4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2"
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.819145 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shhs6"]
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.828220 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-shhs6"]
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.841659 4787 scope.go:117] "RemoveContainer" containerID="6c12f79463b8e8c58f3ebb40c1bc4c5f85fc73b35dc4fbfaa52bccdc8b2d7973"
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.873036 4787 scope.go:117] "RemoveContainer" containerID="33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e"
Jan 26 19:33:16 crc kubenswrapper[4787]: E0126 19:33:16.873419 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e\": container with ID starting with 33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e not found: ID does not exist" containerID="33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e"
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.873465 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e"} err="failed to get container status \"33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e\": rpc error: code = NotFound desc = could not find container \"33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e\": container with ID starting with 33381a71cdc0630491de6f3b0169528d26a799bc055e11e4878f99e67d91506e not found: ID does not exist"
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.873493 4787 scope.go:117] "RemoveContainer" containerID="4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2"
Jan 26 19:33:16 crc kubenswrapper[4787]: E0126 19:33:16.873696 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2\": container with ID starting with 4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2 not found: ID does not exist" containerID="4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2"
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.873724 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2"} err="failed to get container status \"4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2\": rpc error: code = NotFound desc = could not find container \"4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2\": container with ID starting with 4d4c2b1f2155a77efe9590fef77ed52be5fc927e0d46193efa0103c6055472e2 not found: ID does not exist"
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.873742 4787 scope.go:117] "RemoveContainer" containerID="6c12f79463b8e8c58f3ebb40c1bc4c5f85fc73b35dc4fbfaa52bccdc8b2d7973"
Jan 26 19:33:16 crc kubenswrapper[4787]: E0126 19:33:16.873928 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c12f79463b8e8c58f3ebb40c1bc4c5f85fc73b35dc4fbfaa52bccdc8b2d7973\": container with ID starting with 6c12f79463b8e8c58f3ebb40c1bc4c5f85fc73b35dc4fbfaa52bccdc8b2d7973 not found: ID does not exist" containerID="6c12f79463b8e8c58f3ebb40c1bc4c5f85fc73b35dc4fbfaa52bccdc8b2d7973"
Jan 26 19:33:16 crc kubenswrapper[4787]: I0126 19:33:16.873975 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c12f79463b8e8c58f3ebb40c1bc4c5f85fc73b35dc4fbfaa52bccdc8b2d7973"} err="failed to get container status \"6c12f79463b8e8c58f3ebb40c1bc4c5f85fc73b35dc4fbfaa52bccdc8b2d7973\": rpc error: code = NotFound desc = could not find container \"6c12f79463b8e8c58f3ebb40c1bc4c5f85fc73b35dc4fbfaa52bccdc8b2d7973\": container with ID starting with 6c12f79463b8e8c58f3ebb40c1bc4c5f85fc73b35dc4fbfaa52bccdc8b2d7973 not found: ID does not exist"
Jan 26 19:33:17 crc kubenswrapper[4787]: I0126 19:33:17.601889 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02ab9dce-0aac-49a3-934d-6f7f9b167018" path="/var/lib/kubelet/pods/02ab9dce-0aac-49a3-934d-6f7f9b167018/volumes"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.438716 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lp79f"]
Jan 26 19:33:32 crc kubenswrapper[4787]: E0126 19:33:32.439894 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerName="registry-server"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.439913 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerName="registry-server"
Jan 26 19:33:32 crc kubenswrapper[4787]: E0126 19:33:32.439971 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerName="extract-utilities"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.439980 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerName="extract-utilities"
Jan 26 19:33:32 crc kubenswrapper[4787]: E0126 19:33:32.439991 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerName="extract-content"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.440001 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerName="extract-content"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.440273 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ab9dce-0aac-49a3-934d-6f7f9b167018" containerName="registry-server"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.442402 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.451635 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lp79f"]
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.533814 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-utilities\") pod \"redhat-marketplace-lp79f\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") " pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.534514 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-catalog-content\") pod \"redhat-marketplace-lp79f\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") " pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.534652 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9tfz\" (UniqueName: \"kubernetes.io/projected/2b7af8dd-cf98-416f-a5f1-76f8af18b438-kube-api-access-x9tfz\") pod \"redhat-marketplace-lp79f\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") " pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.636372 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-catalog-content\") pod \"redhat-marketplace-lp79f\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") " pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.636485 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9tfz\" (UniqueName: \"kubernetes.io/projected/2b7af8dd-cf98-416f-a5f1-76f8af18b438-kube-api-access-x9tfz\") pod \"redhat-marketplace-lp79f\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") " pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.636747 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-catalog-content\") pod \"redhat-marketplace-lp79f\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") " pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.637255 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-utilities\") pod \"redhat-marketplace-lp79f\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") " pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.637639 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-utilities\") pod \"redhat-marketplace-lp79f\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") " pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.658188 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9tfz\" (UniqueName: \"kubernetes.io/projected/2b7af8dd-cf98-416f-a5f1-76f8af18b438-kube-api-access-x9tfz\") pod \"redhat-marketplace-lp79f\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") " pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:32 crc kubenswrapper[4787]: I0126 19:33:32.763748 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:33 crc kubenswrapper[4787]: I0126 19:33:33.219759 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lp79f"]
Jan 26 19:33:33 crc kubenswrapper[4787]: I0126 19:33:33.982955 4787 generic.go:334] "Generic (PLEG): container finished" podID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" containerID="cb2f1247a7a8a83941c24569c2c39c2fcfd040c986244d2943f144737a1394ed" exitCode=0
Jan 26 19:33:33 crc kubenswrapper[4787]: I0126 19:33:33.983232 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp79f" event={"ID":"2b7af8dd-cf98-416f-a5f1-76f8af18b438","Type":"ContainerDied","Data":"cb2f1247a7a8a83941c24569c2c39c2fcfd040c986244d2943f144737a1394ed"}
Jan 26 19:33:33 crc kubenswrapper[4787]: I0126 19:33:33.983304 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp79f" event={"ID":"2b7af8dd-cf98-416f-a5f1-76f8af18b438","Type":"ContainerStarted","Data":"2c27a612fbb987124edb5019831465af786fb45202d2ab1a1a8fbf5444e64a7b"}
Jan 26 19:33:34 crc kubenswrapper[4787]: I0126 19:33:34.997577 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp79f" event={"ID":"2b7af8dd-cf98-416f-a5f1-76f8af18b438","Type":"ContainerStarted","Data":"1ffa7d364b7a9dd4c2656294370781768ff1e04c4c236edc510be879c609ce7d"}
Jan 26 19:33:36 crc kubenswrapper[4787]: I0126 19:33:36.009667 4787 generic.go:334] "Generic (PLEG): container finished" podID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" containerID="1ffa7d364b7a9dd4c2656294370781768ff1e04c4c236edc510be879c609ce7d" exitCode=0
Jan 26 19:33:36 crc kubenswrapper[4787]: I0126 19:33:36.010065 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp79f" event={"ID":"2b7af8dd-cf98-416f-a5f1-76f8af18b438","Type":"ContainerDied","Data":"1ffa7d364b7a9dd4c2656294370781768ff1e04c4c236edc510be879c609ce7d"}
Jan 26 19:33:37 crc kubenswrapper[4787]: I0126 19:33:37.024176 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp79f" event={"ID":"2b7af8dd-cf98-416f-a5f1-76f8af18b438","Type":"ContainerStarted","Data":"98f8ba2b7c9961256ab9dad0400227a8a5ba887d3cb9d427d65d9755f77fc316"}
Jan 26 19:33:37 crc kubenswrapper[4787]: I0126 19:33:37.047270 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lp79f" podStartSLOduration=2.491282173 podStartE2EDuration="5.047245345s" podCreationTimestamp="2026-01-26 19:33:32 +0000 UTC" firstStartedPulling="2026-01-26 19:33:33.986134089 +0000 UTC m=+6582.693270222" lastFinishedPulling="2026-01-26 19:33:36.542097231 +0000 UTC m=+6585.249233394" observedRunningTime="2026-01-26 19:33:37.042243333 +0000 UTC m=+6585.749379466" watchObservedRunningTime="2026-01-26 19:33:37.047245345 +0000 UTC m=+6585.754381478"
Jan 26 19:33:42 crc kubenswrapper[4787]: I0126 19:33:42.764494 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:42 crc kubenswrapper[4787]: I0126 19:33:42.765860 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:42 crc kubenswrapper[4787]: I0126 19:33:42.816912 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:43 crc kubenswrapper[4787]: I0126 19:33:43.144739 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:45 crc kubenswrapper[4787]: I0126 19:33:45.613105 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lp79f"]
Jan 26 19:33:46 crc kubenswrapper[4787]: I0126 19:33:46.108683 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lp79f" podUID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" containerName="registry-server" containerID="cri-o://98f8ba2b7c9961256ab9dad0400227a8a5ba887d3cb9d427d65d9755f77fc316" gracePeriod=2
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.121395 4787 generic.go:334] "Generic (PLEG): container finished" podID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" containerID="98f8ba2b7c9961256ab9dad0400227a8a5ba887d3cb9d427d65d9755f77fc316" exitCode=0
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.121462 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp79f" event={"ID":"2b7af8dd-cf98-416f-a5f1-76f8af18b438","Type":"ContainerDied","Data":"98f8ba2b7c9961256ab9dad0400227a8a5ba887d3cb9d427d65d9755f77fc316"}
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.122117 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp79f" event={"ID":"2b7af8dd-cf98-416f-a5f1-76f8af18b438","Type":"ContainerDied","Data":"2c27a612fbb987124edb5019831465af786fb45202d2ab1a1a8fbf5444e64a7b"}
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.122137 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c27a612fbb987124edb5019831465af786fb45202d2ab1a1a8fbf5444e64a7b"
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.169824 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.293418 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-utilities\") pod \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") "
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.293676 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-catalog-content\") pod \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") "
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.293763 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9tfz\" (UniqueName: \"kubernetes.io/projected/2b7af8dd-cf98-416f-a5f1-76f8af18b438-kube-api-access-x9tfz\") pod \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\" (UID: \"2b7af8dd-cf98-416f-a5f1-76f8af18b438\") "
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.294916 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-utilities" (OuterVolumeSpecName: "utilities") pod "2b7af8dd-cf98-416f-a5f1-76f8af18b438" (UID: "2b7af8dd-cf98-416f-a5f1-76f8af18b438"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.298876 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7af8dd-cf98-416f-a5f1-76f8af18b438-kube-api-access-x9tfz" (OuterVolumeSpecName: "kube-api-access-x9tfz") pod "2b7af8dd-cf98-416f-a5f1-76f8af18b438" (UID: "2b7af8dd-cf98-416f-a5f1-76f8af18b438"). InnerVolumeSpecName "kube-api-access-x9tfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.315864 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b7af8dd-cf98-416f-a5f1-76f8af18b438" (UID: "2b7af8dd-cf98-416f-a5f1-76f8af18b438"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.396206 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.396269 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7af8dd-cf98-416f-a5f1-76f8af18b438-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 19:33:47 crc kubenswrapper[4787]: I0126 19:33:47.396285 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9tfz\" (UniqueName: \"kubernetes.io/projected/2b7af8dd-cf98-416f-a5f1-76f8af18b438-kube-api-access-x9tfz\") on node \"crc\" DevicePath \"\""
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.130055 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lp79f"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.154716 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lp79f"]
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.164281 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lp79f"]
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.218511 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9mtdl"]
Jan 26 19:33:48 crc kubenswrapper[4787]: E0126 19:33:48.218937 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" containerName="extract-utilities"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.218973 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" containerName="extract-utilities"
Jan 26 19:33:48 crc kubenswrapper[4787]: E0126 19:33:48.219000 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" containerName="extract-content"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.219009 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" containerName="extract-content"
Jan 26 19:33:48 crc kubenswrapper[4787]: E0126 19:33:48.219023 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" containerName="registry-server"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.219030 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" containerName="registry-server"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.219290 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" containerName="registry-server"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.221134 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.242577 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mtdl"]
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.314243 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-catalog-content\") pod \"community-operators-9mtdl\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") " pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.314598 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-utilities\") pod \"community-operators-9mtdl\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") " pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.314690 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxw7k\" (UniqueName: \"kubernetes.io/projected/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-kube-api-access-xxw7k\") pod \"community-operators-9mtdl\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") " pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.416624 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-catalog-content\") pod \"community-operators-9mtdl\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") " pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.417120 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-utilities\") pod \"community-operators-9mtdl\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") " pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.417060 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-catalog-content\") pod \"community-operators-9mtdl\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") " pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.417340 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-utilities\") pod \"community-operators-9mtdl\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") " pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.417490 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxw7k\" (UniqueName: \"kubernetes.io/projected/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-kube-api-access-xxw7k\") pod \"community-operators-9mtdl\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") " pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.437600 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxw7k\" (UniqueName: \"kubernetes.io/projected/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-kube-api-access-xxw7k\") pod \"community-operators-9mtdl\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") " pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:48 crc kubenswrapper[4787]: I0126 19:33:48.543222 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:49 crc kubenswrapper[4787]: I0126 19:33:49.093440 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mtdl"]
Jan 26 19:33:49 crc kubenswrapper[4787]: I0126 19:33:49.140317 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mtdl" event={"ID":"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86","Type":"ContainerStarted","Data":"4e1172c81be0a7da7c891f25a93e41079c98941046b1fbdb5542757f41a783d4"}
Jan 26 19:33:49 crc kubenswrapper[4787]: I0126 19:33:49.602477 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7af8dd-cf98-416f-a5f1-76f8af18b438" path="/var/lib/kubelet/pods/2b7af8dd-cf98-416f-a5f1-76f8af18b438/volumes"
Jan 26 19:33:50 crc kubenswrapper[4787]: I0126 19:33:50.155090 4787 generic.go:334] "Generic (PLEG): container finished" podID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" containerID="8ad5df84458e90b7cfa3530ce08f82604f624dc2b277a1eb4acbe8fe464a9550" exitCode=0
Jan 26 19:33:50 crc kubenswrapper[4787]: I0126 19:33:50.155466 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mtdl" event={"ID":"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86","Type":"ContainerDied","Data":"8ad5df84458e90b7cfa3530ce08f82604f624dc2b277a1eb4acbe8fe464a9550"}
Jan 26 19:33:52 crc kubenswrapper[4787]: I0126 19:33:52.173287 4787 generic.go:334] "Generic (PLEG): container finished" podID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" containerID="cca3d4f191b4927dba392dc1fd2d23fd80c0b2f19a9106ccedc4ea5d2797f6d5" exitCode=0
Jan 26 19:33:52 crc kubenswrapper[4787]: I0126 19:33:52.173368 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mtdl" event={"ID":"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86","Type":"ContainerDied","Data":"cca3d4f191b4927dba392dc1fd2d23fd80c0b2f19a9106ccedc4ea5d2797f6d5"}
Jan 26 19:33:53 crc kubenswrapper[4787]: I0126 19:33:53.185548 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mtdl" event={"ID":"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86","Type":"ContainerStarted","Data":"47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb"}
Jan 26 19:33:58 crc kubenswrapper[4787]: I0126 19:33:58.544499 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:58 crc kubenswrapper[4787]: I0126 19:33:58.545092 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:58 crc kubenswrapper[4787]: I0126 19:33:58.589773 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:58 crc kubenswrapper[4787]: I0126 19:33:58.614321 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9mtdl" podStartSLOduration=8.183964076 podStartE2EDuration="10.614302188s" podCreationTimestamp="2026-01-26 19:33:48 +0000 UTC" firstStartedPulling="2026-01-26 19:33:50.15753259 +0000 UTC m=+6598.864668753" lastFinishedPulling="2026-01-26 19:33:52.587870732 +0000 UTC m=+6601.295006865" observedRunningTime="2026-01-26 19:33:53.203480394 +0000 UTC m=+6601.910616537" watchObservedRunningTime="2026-01-26 19:33:58.614302188 +0000 UTC m=+6607.321438321"
Jan 26 19:33:59 crc kubenswrapper[4787]: I0126 19:33:59.302783 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:33:59 crc kubenswrapper[4787]: I0126 19:33:59.356390 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9mtdl"]
Jan 26 19:34:01 crc kubenswrapper[4787]: I0126 19:34:01.266892 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9mtdl" podUID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" containerName="registry-server" containerID="cri-o://47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb" gracePeriod=2
Jan 26 19:34:01 crc kubenswrapper[4787]: I0126 19:34:01.783850 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mtdl"
Jan 26 19:34:01 crc kubenswrapper[4787]: I0126 19:34:01.949372 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-catalog-content\") pod \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") "
Jan 26 19:34:01 crc kubenswrapper[4787]: I0126 19:34:01.949523 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-utilities\") pod \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") "
Jan 26 19:34:01 crc kubenswrapper[4787]: I0126 19:34:01.949615 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxw7k\" (UniqueName: \"kubernetes.io/projected/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-kube-api-access-xxw7k\") pod \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\" (UID: \"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86\") "
Jan 26 19:34:01 crc kubenswrapper[4787]: I0126 19:34:01.951065 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-utilities" (OuterVolumeSpecName: "utilities") pod "8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" (UID: 
"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:34:01 crc kubenswrapper[4787]: I0126 19:34:01.957470 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-kube-api-access-xxw7k" (OuterVolumeSpecName: "kube-api-access-xxw7k") pod "8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" (UID: "8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86"). InnerVolumeSpecName "kube-api-access-xxw7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.052449 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.052676 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxw7k\" (UniqueName: \"kubernetes.io/projected/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-kube-api-access-xxw7k\") on node \"crc\" DevicePath \"\"" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.178711 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" (UID: "8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.259875 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.285829 4787 generic.go:334] "Generic (PLEG): container finished" podID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" containerID="47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb" exitCode=0 Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.285914 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mtdl" event={"ID":"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86","Type":"ContainerDied","Data":"47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb"} Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.285966 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mtdl" event={"ID":"8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86","Type":"ContainerDied","Data":"4e1172c81be0a7da7c891f25a93e41079c98941046b1fbdb5542757f41a783d4"} Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.286148 4787 scope.go:117] "RemoveContainer" containerID="47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.286172 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9mtdl" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.316501 4787 scope.go:117] "RemoveContainer" containerID="cca3d4f191b4927dba392dc1fd2d23fd80c0b2f19a9106ccedc4ea5d2797f6d5" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.323726 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9mtdl"] Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.336290 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9mtdl"] Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.343765 4787 scope.go:117] "RemoveContainer" containerID="8ad5df84458e90b7cfa3530ce08f82604f624dc2b277a1eb4acbe8fe464a9550" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.383268 4787 scope.go:117] "RemoveContainer" containerID="47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb" Jan 26 19:34:02 crc kubenswrapper[4787]: E0126 19:34:02.383662 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb\": container with ID starting with 47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb not found: ID does not exist" containerID="47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.383693 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb"} err="failed to get container status \"47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb\": rpc error: code = NotFound desc = could not find container \"47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb\": container with ID starting with 47bcb890645067d8d07ef1f61027f0abef122929352e13a1bb7aa504f6b2d2eb not 
found: ID does not exist" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.383714 4787 scope.go:117] "RemoveContainer" containerID="cca3d4f191b4927dba392dc1fd2d23fd80c0b2f19a9106ccedc4ea5d2797f6d5" Jan 26 19:34:02 crc kubenswrapper[4787]: E0126 19:34:02.383958 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca3d4f191b4927dba392dc1fd2d23fd80c0b2f19a9106ccedc4ea5d2797f6d5\": container with ID starting with cca3d4f191b4927dba392dc1fd2d23fd80c0b2f19a9106ccedc4ea5d2797f6d5 not found: ID does not exist" containerID="cca3d4f191b4927dba392dc1fd2d23fd80c0b2f19a9106ccedc4ea5d2797f6d5" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.383990 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca3d4f191b4927dba392dc1fd2d23fd80c0b2f19a9106ccedc4ea5d2797f6d5"} err="failed to get container status \"cca3d4f191b4927dba392dc1fd2d23fd80c0b2f19a9106ccedc4ea5d2797f6d5\": rpc error: code = NotFound desc = could not find container \"cca3d4f191b4927dba392dc1fd2d23fd80c0b2f19a9106ccedc4ea5d2797f6d5\": container with ID starting with cca3d4f191b4927dba392dc1fd2d23fd80c0b2f19a9106ccedc4ea5d2797f6d5 not found: ID does not exist" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.384008 4787 scope.go:117] "RemoveContainer" containerID="8ad5df84458e90b7cfa3530ce08f82604f624dc2b277a1eb4acbe8fe464a9550" Jan 26 19:34:02 crc kubenswrapper[4787]: E0126 19:34:02.384376 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad5df84458e90b7cfa3530ce08f82604f624dc2b277a1eb4acbe8fe464a9550\": container with ID starting with 8ad5df84458e90b7cfa3530ce08f82604f624dc2b277a1eb4acbe8fe464a9550 not found: ID does not exist" containerID="8ad5df84458e90b7cfa3530ce08f82604f624dc2b277a1eb4acbe8fe464a9550" Jan 26 19:34:02 crc kubenswrapper[4787]: I0126 19:34:02.384402 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad5df84458e90b7cfa3530ce08f82604f624dc2b277a1eb4acbe8fe464a9550"} err="failed to get container status \"8ad5df84458e90b7cfa3530ce08f82604f624dc2b277a1eb4acbe8fe464a9550\": rpc error: code = NotFound desc = could not find container \"8ad5df84458e90b7cfa3530ce08f82604f624dc2b277a1eb4acbe8fe464a9550\": container with ID starting with 8ad5df84458e90b7cfa3530ce08f82604f624dc2b277a1eb4acbe8fe464a9550 not found: ID does not exist" Jan 26 19:34:03 crc kubenswrapper[4787]: I0126 19:34:03.016209 4787 scope.go:117] "RemoveContainer" containerID="00d07112b9eb3d017d543e54ef50a093729a76d55831afdfa3ff4dbc68cbec97" Jan 26 19:34:03 crc kubenswrapper[4787]: I0126 19:34:03.603604 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" path="/var/lib/kubelet/pods/8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86/volumes" Jan 26 19:34:26 crc kubenswrapper[4787]: I0126 19:34:26.049806 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-8227-account-create-update-5tdpx"] Jan 26 19:34:26 crc kubenswrapper[4787]: I0126 19:34:26.066687 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-8227-account-create-update-5tdpx"] Jan 26 19:34:26 crc kubenswrapper[4787]: I0126 19:34:26.078230 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-6lpbc"] Jan 26 19:34:26 crc kubenswrapper[4787]: I0126 19:34:26.097750 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-6lpbc"] Jan 26 19:34:27 crc kubenswrapper[4787]: I0126 19:34:27.605789 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973b6e22-3478-4638-9023-474565f3a8fb" path="/var/lib/kubelet/pods/973b6e22-3478-4638-9023-474565f3a8fb/volumes" Jan 26 19:34:27 crc kubenswrapper[4787]: I0126 19:34:27.607333 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f3dae570-08ed-49e2-a450-42d8b512a701" path="/var/lib/kubelet/pods/f3dae570-08ed-49e2-a450-42d8b512a701/volumes" Jan 26 19:34:41 crc kubenswrapper[4787]: I0126 19:34:41.053241 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-cxp2v"] Jan 26 19:34:41 crc kubenswrapper[4787]: I0126 19:34:41.065389 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-cxp2v"] Jan 26 19:34:41 crc kubenswrapper[4787]: I0126 19:34:41.602552 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3448f12c-52a4-47a3-8247-084746b6c32f" path="/var/lib/kubelet/pods/3448f12c-52a4-47a3-8247-084746b6c32f/volumes" Jan 26 19:34:46 crc kubenswrapper[4787]: I0126 19:34:46.808418 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:34:46 crc kubenswrapper[4787]: I0126 19:34:46.809123 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:35:03 crc kubenswrapper[4787]: I0126 19:35:03.107123 4787 scope.go:117] "RemoveContainer" containerID="50aefd6818856d2225e5d6f1f1e563b0fe28550e83c66f8383a55461bbc5f238" Jan 26 19:35:03 crc kubenswrapper[4787]: I0126 19:35:03.136235 4787 scope.go:117] "RemoveContainer" containerID="134f564ff61c7e7e7600bf0d35c54b5edecfe5f5ed1664b4109f3b0e2c013db9" Jan 26 19:35:03 crc kubenswrapper[4787]: I0126 19:35:03.180741 4787 scope.go:117] "RemoveContainer" containerID="1b722a02ad4b662a99923569cb7fa32c8cf9096aad7ee1cf0b31cf21dd266139" Jan 26 19:35:16 crc 
kubenswrapper[4787]: I0126 19:35:16.807784 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:35:16 crc kubenswrapper[4787]: I0126 19:35:16.809174 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:35:46 crc kubenswrapper[4787]: I0126 19:35:46.807533 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:35:46 crc kubenswrapper[4787]: I0126 19:35:46.808175 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:35:46 crc kubenswrapper[4787]: I0126 19:35:46.808244 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 19:35:46 crc kubenswrapper[4787]: I0126 19:35:46.809275 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb"} 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:35:46 crc kubenswrapper[4787]: I0126 19:35:46.809352 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" gracePeriod=600 Jan 26 19:35:46 crc kubenswrapper[4787]: E0126 19:35:46.936908 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:35:47 crc kubenswrapper[4787]: I0126 19:35:47.320736 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" exitCode=0 Jan 26 19:35:47 crc kubenswrapper[4787]: I0126 19:35:47.320785 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb"} Jan 26 19:35:47 crc kubenswrapper[4787]: I0126 19:35:47.320828 4787 scope.go:117] "RemoveContainer" containerID="25ad5d5ab6da5a26ef32edfd02a6c7f57994a38dadf7f56c7c0ef086287b79ae" Jan 26 19:35:47 crc kubenswrapper[4787]: I0126 19:35:47.321488 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 
26 19:35:47 crc kubenswrapper[4787]: E0126 19:35:47.321757 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:35:59 crc kubenswrapper[4787]: I0126 19:35:59.590192 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:35:59 crc kubenswrapper[4787]: E0126 19:35:59.591083 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:36:13 crc kubenswrapper[4787]: I0126 19:36:13.590004 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:36:13 crc kubenswrapper[4787]: E0126 19:36:13.590806 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:36:27 crc kubenswrapper[4787]: I0126 19:36:27.589479 4787 scope.go:117] "RemoveContainer" 
containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:36:27 crc kubenswrapper[4787]: E0126 19:36:27.590309 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:36:40 crc kubenswrapper[4787]: I0126 19:36:40.590606 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:36:40 crc kubenswrapper[4787]: E0126 19:36:40.591498 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:36:54 crc kubenswrapper[4787]: I0126 19:36:54.589092 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:36:54 crc kubenswrapper[4787]: E0126 19:36:54.589985 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:37:07 crc kubenswrapper[4787]: I0126 19:37:07.589524 4787 scope.go:117] 
"RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:37:07 crc kubenswrapper[4787]: E0126 19:37:07.590361 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:37:15 crc kubenswrapper[4787]: I0126 19:37:15.055842 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-1544-account-create-update-mbjcv"] Jan 26 19:37:15 crc kubenswrapper[4787]: I0126 19:37:15.068144 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-1544-account-create-update-mbjcv"] Jan 26 19:37:15 crc kubenswrapper[4787]: I0126 19:37:15.603678 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f18e5d1-19e3-4c9e-883a-56e294d936a1" path="/var/lib/kubelet/pods/1f18e5d1-19e3-4c9e-883a-56e294d936a1/volumes" Jan 26 19:37:16 crc kubenswrapper[4787]: I0126 19:37:16.073328 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-xsl7r"] Jan 26 19:37:16 crc kubenswrapper[4787]: I0126 19:37:16.100753 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-xsl7r"] Jan 26 19:37:17 crc kubenswrapper[4787]: I0126 19:37:17.606791 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3adf2c19-c9c4-4d1e-a577-b511f4b22e93" path="/var/lib/kubelet/pods/3adf2c19-c9c4-4d1e-a577-b511f4b22e93/volumes" Jan 26 19:37:18 crc kubenswrapper[4787]: I0126 19:37:18.589819 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:37:18 crc kubenswrapper[4787]: E0126 19:37:18.590372 4787 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:37:29 crc kubenswrapper[4787]: I0126 19:37:29.589134 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:37:29 crc kubenswrapper[4787]: E0126 19:37:29.589816 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:37:31 crc kubenswrapper[4787]: I0126 19:37:31.030009 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-ftc88"] Jan 26 19:37:31 crc kubenswrapper[4787]: I0126 19:37:31.055785 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-ftc88"] Jan 26 19:37:31 crc kubenswrapper[4787]: I0126 19:37:31.603104 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a2ed31f-039f-490e-8a26-89cd41005449" path="/var/lib/kubelet/pods/8a2ed31f-039f-490e-8a26-89cd41005449/volumes" Jan 26 19:37:40 crc kubenswrapper[4787]: I0126 19:37:40.589766 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:37:40 crc kubenswrapper[4787]: E0126 19:37:40.592025 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:37:51 crc kubenswrapper[4787]: I0126 19:37:51.598672 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:37:51 crc kubenswrapper[4787]: E0126 19:37:51.599558 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:37:53 crc kubenswrapper[4787]: I0126 19:37:53.060018 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-gt2wn"] Jan 26 19:37:53 crc kubenswrapper[4787]: I0126 19:37:53.073924 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-3fc2-account-create-update-5qwlg"] Jan 26 19:37:53 crc kubenswrapper[4787]: I0126 19:37:53.086600 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-gt2wn"] Jan 26 19:37:53 crc kubenswrapper[4787]: I0126 19:37:53.097633 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-3fc2-account-create-update-5qwlg"] Jan 26 19:37:53 crc kubenswrapper[4787]: I0126 19:37:53.607254 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="665dec65-293e-4768-9b29-4fd9ace8bd56" path="/var/lib/kubelet/pods/665dec65-293e-4768-9b29-4fd9ace8bd56/volumes" Jan 26 19:37:53 crc kubenswrapper[4787]: I0126 19:37:53.608759 4787 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e300fc33-07d6-49b7-8549-7ac0fd513183" path="/var/lib/kubelet/pods/e300fc33-07d6-49b7-8549-7ac0fd513183/volumes" Jan 26 19:38:03 crc kubenswrapper[4787]: I0126 19:38:03.335063 4787 scope.go:117] "RemoveContainer" containerID="dc29d76d2aa4971b96fb3adbb941397cc4d671a04e13dde280359ba5393b39da" Jan 26 19:38:03 crc kubenswrapper[4787]: I0126 19:38:03.375496 4787 scope.go:117] "RemoveContainer" containerID="9a8ddf99d25ee0a60821b6bcb54621cf39f8fc8fe119c0f51d784e5e4ecfed0c" Jan 26 19:38:03 crc kubenswrapper[4787]: I0126 19:38:03.404736 4787 scope.go:117] "RemoveContainer" containerID="7907e97ba640a87b556e93bf750a7283943fe08d8df2b82639a90b21cdeb902c" Jan 26 19:38:03 crc kubenswrapper[4787]: I0126 19:38:03.446267 4787 scope.go:117] "RemoveContainer" containerID="d55a50548e656aa9b3e74b427a3adf65e2b4c7b03a76247ea5fdb032eb275d7b" Jan 26 19:38:03 crc kubenswrapper[4787]: I0126 19:38:03.491148 4787 scope.go:117] "RemoveContainer" containerID="feea5faf4f93742029b88fa6362f9e5fed94a9d677137f62011a5a099bdd4e24" Jan 26 19:38:03 crc kubenswrapper[4787]: I0126 19:38:03.591013 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:38:03 crc kubenswrapper[4787]: E0126 19:38:03.591296 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:38:05 crc kubenswrapper[4787]: I0126 19:38:05.039180 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-f8b8t"] Jan 26 19:38:05 crc kubenswrapper[4787]: I0126 19:38:05.050323 4787 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/manila-db-sync-f8b8t"] Jan 26 19:38:05 crc kubenswrapper[4787]: I0126 19:38:05.606618 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="490efd8f-2084-4275-96a4-8d458e201ed1" path="/var/lib/kubelet/pods/490efd8f-2084-4275-96a4-8d458e201ed1/volumes" Jan 26 19:38:17 crc kubenswrapper[4787]: I0126 19:38:17.589115 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:38:17 crc kubenswrapper[4787]: E0126 19:38:17.590100 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:38:32 crc kubenswrapper[4787]: I0126 19:38:32.589718 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:38:32 crc kubenswrapper[4787]: E0126 19:38:32.590465 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:38:44 crc kubenswrapper[4787]: I0126 19:38:44.590937 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:38:44 crc kubenswrapper[4787]: E0126 19:38:44.591814 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:38:59 crc kubenswrapper[4787]: I0126 19:38:59.589873 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:38:59 crc kubenswrapper[4787]: E0126 19:38:59.590714 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:39:03 crc kubenswrapper[4787]: I0126 19:39:03.646841 4787 scope.go:117] "RemoveContainer" containerID="ca07cc9ef2af9c6e98ff72a9fa3d82c44436d8df3fe0221a4f74777bd795b99a" Jan 26 19:39:13 crc kubenswrapper[4787]: I0126 19:39:13.589864 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:39:13 crc kubenswrapper[4787]: E0126 19:39:13.590621 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:39:25 crc kubenswrapper[4787]: I0126 19:39:25.590658 4787 scope.go:117] "RemoveContainer" 
containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:39:25 crc kubenswrapper[4787]: E0126 19:39:25.591981 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:39:40 crc kubenswrapper[4787]: I0126 19:39:40.590115 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:39:40 crc kubenswrapper[4787]: E0126 19:39:40.591007 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:39:53 crc kubenswrapper[4787]: I0126 19:39:53.589908 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:39:53 crc kubenswrapper[4787]: E0126 19:39:53.590671 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:40:03 crc kubenswrapper[4787]: I0126 19:40:03.731257 4787 scope.go:117] 
"RemoveContainer" containerID="1ffa7d364b7a9dd4c2656294370781768ff1e04c4c236edc510be879c609ce7d" Jan 26 19:40:03 crc kubenswrapper[4787]: I0126 19:40:03.766378 4787 scope.go:117] "RemoveContainer" containerID="98f8ba2b7c9961256ab9dad0400227a8a5ba887d3cb9d427d65d9755f77fc316" Jan 26 19:40:03 crc kubenswrapper[4787]: I0126 19:40:03.801701 4787 scope.go:117] "RemoveContainer" containerID="cb2f1247a7a8a83941c24569c2c39c2fcfd040c986244d2943f144737a1394ed" Jan 26 19:40:07 crc kubenswrapper[4787]: I0126 19:40:07.591872 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:40:07 crc kubenswrapper[4787]: E0126 19:40:07.592666 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:40:19 crc kubenswrapper[4787]: I0126 19:40:19.589984 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:40:19 crc kubenswrapper[4787]: E0126 19:40:19.591052 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:40:34 crc kubenswrapper[4787]: I0126 19:40:34.589697 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:40:34 crc 
kubenswrapper[4787]: E0126 19:40:34.590528 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:40:47 crc kubenswrapper[4787]: I0126 19:40:47.589755 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:40:48 crc kubenswrapper[4787]: I0126 19:40:48.425190 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"d6c7d4eb104591d8bce593f41cd34764d38005a94c22d20c3aeb66e411092dcb"} Jan 26 19:40:57 crc kubenswrapper[4787]: I0126 19:40:57.509508 4787 generic.go:334] "Generic (PLEG): container finished" podID="c86490c4-11c2-4a38-9de0-cdb076526ea1" containerID="dff20b46aeaa92464c3d360f92f313e281cc7713c55e362317aa78769207a948" exitCode=0 Jan 26 19:40:57 crc kubenswrapper[4787]: I0126 19:40:57.509595 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" event={"ID":"c86490c4-11c2-4a38-9de0-cdb076526ea1","Type":"ContainerDied","Data":"dff20b46aeaa92464c3d360f92f313e281cc7713c55e362317aa78769207a948"} Jan 26 19:40:58 crc kubenswrapper[4787]: I0126 19:40:58.960023 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.072815 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ceph\") pod \"c86490c4-11c2-4a38-9de0-cdb076526ea1\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.072883 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ssh-key-openstack-cell1\") pod \"c86490c4-11c2-4a38-9de0-cdb076526ea1\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.073131 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnpp7\" (UniqueName: \"kubernetes.io/projected/c86490c4-11c2-4a38-9de0-cdb076526ea1-kube-api-access-dnpp7\") pod \"c86490c4-11c2-4a38-9de0-cdb076526ea1\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.073160 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-inventory\") pod \"c86490c4-11c2-4a38-9de0-cdb076526ea1\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.073216 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-tripleo-cleanup-combined-ca-bundle\") pod \"c86490c4-11c2-4a38-9de0-cdb076526ea1\" (UID: \"c86490c4-11c2-4a38-9de0-cdb076526ea1\") " Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.081301 4787 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ceph" (OuterVolumeSpecName: "ceph") pod "c86490c4-11c2-4a38-9de0-cdb076526ea1" (UID: "c86490c4-11c2-4a38-9de0-cdb076526ea1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.081907 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86490c4-11c2-4a38-9de0-cdb076526ea1-kube-api-access-dnpp7" (OuterVolumeSpecName: "kube-api-access-dnpp7") pod "c86490c4-11c2-4a38-9de0-cdb076526ea1" (UID: "c86490c4-11c2-4a38-9de0-cdb076526ea1"). InnerVolumeSpecName "kube-api-access-dnpp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.085218 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "c86490c4-11c2-4a38-9de0-cdb076526ea1" (UID: "c86490c4-11c2-4a38-9de0-cdb076526ea1"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.110359 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c86490c4-11c2-4a38-9de0-cdb076526ea1" (UID: "c86490c4-11c2-4a38-9de0-cdb076526ea1"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.112236 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-inventory" (OuterVolumeSpecName: "inventory") pod "c86490c4-11c2-4a38-9de0-cdb076526ea1" (UID: "c86490c4-11c2-4a38-9de0-cdb076526ea1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.176496 4787 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.176529 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.176544 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.176555 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnpp7\" (UniqueName: \"kubernetes.io/projected/c86490c4-11c2-4a38-9de0-cdb076526ea1-kube-api-access-dnpp7\") on node \"crc\" DevicePath \"\"" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.176564 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c86490c4-11c2-4a38-9de0-cdb076526ea1-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.533030 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" event={"ID":"c86490c4-11c2-4a38-9de0-cdb076526ea1","Type":"ContainerDied","Data":"139726cb94a854203734c894b36c402b59013c4286047a98e60ecd861f3f202e"} Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.533642 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="139726cb94a854203734c894b36c402b59013c4286047a98e60ecd861f3f202e" Jan 26 19:40:59 crc kubenswrapper[4787]: I0126 19:40:59.533167 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.342213 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-v6ng2"] Jan 26 19:41:02 crc kubenswrapper[4787]: E0126 19:41:02.343400 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86490c4-11c2-4a38-9de0-cdb076526ea1" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.343419 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86490c4-11c2-4a38-9de0-cdb076526ea1" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 26 19:41:02 crc kubenswrapper[4787]: E0126 19:41:02.343432 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" containerName="registry-server" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.343439 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" containerName="registry-server" Jan 26 19:41:02 crc kubenswrapper[4787]: E0126 19:41:02.343455 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" containerName="extract-utilities" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.343461 4787 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" containerName="extract-utilities" Jan 26 19:41:02 crc kubenswrapper[4787]: E0126 19:41:02.343485 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" containerName="extract-content" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.343491 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" containerName="extract-content" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.343675 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dcbd4b1-b94b-4f5d-8706-2cf9e3bacd86" containerName="registry-server" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.343691 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86490c4-11c2-4a38-9de0-cdb076526ea1" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.344449 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.348218 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.349065 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.349419 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.349959 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.356018 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ceph\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.356081 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgrxl\" (UniqueName: \"kubernetes.io/projected/68827c36-2f69-4ec0-a472-29afc9bb73ce-kube-api-access-dgrxl\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.356144 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-inventory\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") 
" pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.356179 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.356215 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.357214 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-v6ng2"] Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.457809 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-inventory\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.457886 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 
19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.457929 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.458074 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ceph\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.458136 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgrxl\" (UniqueName: \"kubernetes.io/projected/68827c36-2f69-4ec0-a472-29afc9bb73ce-kube-api-access-dgrxl\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.464501 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.466502 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: 
\"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.467790 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-inventory\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.468499 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ceph\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.476251 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgrxl\" (UniqueName: \"kubernetes.io/projected/68827c36-2f69-4ec0-a472-29afc9bb73ce-kube-api-access-dgrxl\") pod \"bootstrap-openstack-openstack-cell1-v6ng2\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:02 crc kubenswrapper[4787]: I0126 19:41:02.666059 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:41:03 crc kubenswrapper[4787]: I0126 19:41:03.175307 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-v6ng2"] Jan 26 19:41:03 crc kubenswrapper[4787]: I0126 19:41:03.181317 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:41:03 crc kubenswrapper[4787]: I0126 19:41:03.572833 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" event={"ID":"68827c36-2f69-4ec0-a472-29afc9bb73ce","Type":"ContainerStarted","Data":"547968c51ba0e018dfbc4703d4c648e8ba4f46fb15441f07ad66416b4bd7a90d"} Jan 26 19:41:04 crc kubenswrapper[4787]: I0126 19:41:04.583228 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" event={"ID":"68827c36-2f69-4ec0-a472-29afc9bb73ce","Type":"ContainerStarted","Data":"251a4f74f5e2ce3d801d7e3ee428a79bf7e79f323d9b5e8d0d4cca0616d97b7d"} Jan 26 19:41:04 crc kubenswrapper[4787]: I0126 19:41:04.611198 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" podStartSLOduration=2.15117712 podStartE2EDuration="2.611179897s" podCreationTimestamp="2026-01-26 19:41:02 +0000 UTC" firstStartedPulling="2026-01-26 19:41:03.181116968 +0000 UTC m=+7031.888253101" lastFinishedPulling="2026-01-26 19:41:03.641119745 +0000 UTC m=+7032.348255878" observedRunningTime="2026-01-26 19:41:04.604314699 +0000 UTC m=+7033.311450832" watchObservedRunningTime="2026-01-26 19:41:04.611179897 +0000 UTC m=+7033.318316030" Jan 26 19:43:16 crc kubenswrapper[4787]: I0126 19:43:16.808041 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:43:16 crc kubenswrapper[4787]: I0126 19:43:16.808661 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:43:46 crc kubenswrapper[4787]: I0126 19:43:46.810609 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:43:46 crc kubenswrapper[4787]: I0126 19:43:46.811704 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:44:06 crc kubenswrapper[4787]: I0126 19:44:06.530403 4787 generic.go:334] "Generic (PLEG): container finished" podID="68827c36-2f69-4ec0-a472-29afc9bb73ce" containerID="251a4f74f5e2ce3d801d7e3ee428a79bf7e79f323d9b5e8d0d4cca0616d97b7d" exitCode=0 Jan 26 19:44:06 crc kubenswrapper[4787]: I0126 19:44:06.530500 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" event={"ID":"68827c36-2f69-4ec0-a472-29afc9bb73ce","Type":"ContainerDied","Data":"251a4f74f5e2ce3d801d7e3ee428a79bf7e79f323d9b5e8d0d4cca0616d97b7d"} Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.046104 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.230321 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgrxl\" (UniqueName: \"kubernetes.io/projected/68827c36-2f69-4ec0-a472-29afc9bb73ce-kube-api-access-dgrxl\") pod \"68827c36-2f69-4ec0-a472-29afc9bb73ce\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.230432 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ceph\") pod \"68827c36-2f69-4ec0-a472-29afc9bb73ce\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.230473 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-bootstrap-combined-ca-bundle\") pod \"68827c36-2f69-4ec0-a472-29afc9bb73ce\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.230509 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-inventory\") pod \"68827c36-2f69-4ec0-a472-29afc9bb73ce\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.230533 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ssh-key-openstack-cell1\") pod \"68827c36-2f69-4ec0-a472-29afc9bb73ce\" (UID: \"68827c36-2f69-4ec0-a472-29afc9bb73ce\") " Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.238800 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "68827c36-2f69-4ec0-a472-29afc9bb73ce" (UID: "68827c36-2f69-4ec0-a472-29afc9bb73ce"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.238928 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ceph" (OuterVolumeSpecName: "ceph") pod "68827c36-2f69-4ec0-a472-29afc9bb73ce" (UID: "68827c36-2f69-4ec0-a472-29afc9bb73ce"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.239985 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68827c36-2f69-4ec0-a472-29afc9bb73ce-kube-api-access-dgrxl" (OuterVolumeSpecName: "kube-api-access-dgrxl") pod "68827c36-2f69-4ec0-a472-29afc9bb73ce" (UID: "68827c36-2f69-4ec0-a472-29afc9bb73ce"). InnerVolumeSpecName "kube-api-access-dgrxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.266477 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-inventory" (OuterVolumeSpecName: "inventory") pod "68827c36-2f69-4ec0-a472-29afc9bb73ce" (UID: "68827c36-2f69-4ec0-a472-29afc9bb73ce"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.276659 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "68827c36-2f69-4ec0-a472-29afc9bb73ce" (UID: "68827c36-2f69-4ec0-a472-29afc9bb73ce"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.333405 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.333474 4787 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.333491 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.333503 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/68827c36-2f69-4ec0-a472-29afc9bb73ce-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.333513 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgrxl\" (UniqueName: \"kubernetes.io/projected/68827c36-2f69-4ec0-a472-29afc9bb73ce-kube-api-access-dgrxl\") on node \"crc\" DevicePath \"\"" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.551050 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" event={"ID":"68827c36-2f69-4ec0-a472-29afc9bb73ce","Type":"ContainerDied","Data":"547968c51ba0e018dfbc4703d4c648e8ba4f46fb15441f07ad66416b4bd7a90d"} Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.551094 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="547968c51ba0e018dfbc4703d4c648e8ba4f46fb15441f07ad66416b4bd7a90d" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.551174 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-v6ng2" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.675754 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-v7456"] Jan 26 19:44:08 crc kubenswrapper[4787]: E0126 19:44:08.676519 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68827c36-2f69-4ec0-a472-29afc9bb73ce" containerName="bootstrap-openstack-openstack-cell1" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.676541 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="68827c36-2f69-4ec0-a472-29afc9bb73ce" containerName="bootstrap-openstack-openstack-cell1" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.676815 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="68827c36-2f69-4ec0-a472-29afc9bb73ce" containerName="bootstrap-openstack-openstack-cell1" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.677883 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.683887 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.684189 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.684431 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.685755 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.694887 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-v7456"] Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.844735 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.844809 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ceph\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.844864 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-rlm7d\" (UniqueName: \"kubernetes.io/projected/8a7b5882-d776-47f7-a895-ff7728795475-kube-api-access-rlm7d\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.845041 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-inventory\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.947231 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-inventory\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.947594 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.947663 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ceph\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 
19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.947742 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlm7d\" (UniqueName: \"kubernetes.io/projected/8a7b5882-d776-47f7-a895-ff7728795475-kube-api-access-rlm7d\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.954396 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.955249 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-inventory\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.962546 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ceph\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:08 crc kubenswrapper[4787]: I0126 19:44:08.966070 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlm7d\" (UniqueName: \"kubernetes.io/projected/8a7b5882-d776-47f7-a895-ff7728795475-kube-api-access-rlm7d\") pod \"download-cache-openstack-openstack-cell1-v7456\" (UID: 
\"8a7b5882-d776-47f7-a895-ff7728795475\") " pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:09 crc kubenswrapper[4787]: I0126 19:44:09.014810 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:44:09 crc kubenswrapper[4787]: I0126 19:44:09.618965 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-v7456"] Jan 26 19:44:10 crc kubenswrapper[4787]: I0126 19:44:10.579766 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-v7456" event={"ID":"8a7b5882-d776-47f7-a895-ff7728795475","Type":"ContainerStarted","Data":"d6ceab0c65dab244d705b9e185a23af23745a63d05516d709e3cb6ad08d9f6d4"} Jan 26 19:44:10 crc kubenswrapper[4787]: I0126 19:44:10.580185 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-v7456" event={"ID":"8a7b5882-d776-47f7-a895-ff7728795475","Type":"ContainerStarted","Data":"7d4811bd8414c87c13a1d72539a785a8dfa3d825eed7e234d846e1bfde1b033d"} Jan 26 19:44:10 crc kubenswrapper[4787]: I0126 19:44:10.596707 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-v7456" podStartSLOduration=2.001257695 podStartE2EDuration="2.596692513s" podCreationTimestamp="2026-01-26 19:44:08 +0000 UTC" firstStartedPulling="2026-01-26 19:44:09.626401265 +0000 UTC m=+7218.333537408" lastFinishedPulling="2026-01-26 19:44:10.221836093 +0000 UTC m=+7218.928972226" observedRunningTime="2026-01-26 19:44:10.595491813 +0000 UTC m=+7219.302627936" watchObservedRunningTime="2026-01-26 19:44:10.596692513 +0000 UTC m=+7219.303828646" Jan 26 19:44:16 crc kubenswrapper[4787]: I0126 19:44:16.809066 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:44:16 crc kubenswrapper[4787]: I0126 19:44:16.809710 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:44:16 crc kubenswrapper[4787]: I0126 19:44:16.809775 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 19:44:16 crc kubenswrapper[4787]: I0126 19:44:16.811071 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6c7d4eb104591d8bce593f41cd34764d38005a94c22d20c3aeb66e411092dcb"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:44:16 crc kubenswrapper[4787]: I0126 19:44:16.811206 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://d6c7d4eb104591d8bce593f41cd34764d38005a94c22d20c3aeb66e411092dcb" gracePeriod=600 Jan 26 19:44:17 crc kubenswrapper[4787]: I0126 19:44:17.652913 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="d6c7d4eb104591d8bce593f41cd34764d38005a94c22d20c3aeb66e411092dcb" exitCode=0 Jan 26 19:44:17 crc kubenswrapper[4787]: I0126 19:44:17.652988 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"d6c7d4eb104591d8bce593f41cd34764d38005a94c22d20c3aeb66e411092dcb"} Jan 26 19:44:17 crc kubenswrapper[4787]: I0126 19:44:17.653418 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920"} Jan 26 19:44:17 crc kubenswrapper[4787]: I0126 19:44:17.653435 4787 scope.go:117] "RemoveContainer" containerID="74f76c98d1d155f96d9a31b58c293432ac2fbb5925c39355929fbeb3ee4b2cdb" Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.401554 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xfsx6"] Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.404893 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.420404 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfsx6"] Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.433655 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-catalog-content\") pod \"redhat-marketplace-xfsx6\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.433744 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-utilities\") pod \"redhat-marketplace-xfsx6\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.433829 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7ff\" (UniqueName: \"kubernetes.io/projected/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-kube-api-access-hq7ff\") pod \"redhat-marketplace-xfsx6\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.535362 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-catalog-content\") pod \"redhat-marketplace-xfsx6\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.535613 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-utilities\") pod \"redhat-marketplace-xfsx6\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.535666 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq7ff\" (UniqueName: \"kubernetes.io/projected/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-kube-api-access-hq7ff\") pod \"redhat-marketplace-xfsx6\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.536034 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-catalog-content\") pod \"redhat-marketplace-xfsx6\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.536580 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-utilities\") pod \"redhat-marketplace-xfsx6\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.562458 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7ff\" (UniqueName: \"kubernetes.io/projected/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-kube-api-access-hq7ff\") pod \"redhat-marketplace-xfsx6\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:27 crc kubenswrapper[4787]: I0126 19:44:27.743769 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:28 crc kubenswrapper[4787]: I0126 19:44:28.272244 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfsx6"] Jan 26 19:44:28 crc kubenswrapper[4787]: I0126 19:44:28.765233 4787 generic.go:334] "Generic (PLEG): container finished" podID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" containerID="3c35c5b314567cd88024d62703109e943851080682aca53dc51db5f97f7d1adc" exitCode=0 Jan 26 19:44:28 crc kubenswrapper[4787]: I0126 19:44:28.765292 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfsx6" event={"ID":"58cd99a2-b2c1-4bbb-aaec-163629ee1e64","Type":"ContainerDied","Data":"3c35c5b314567cd88024d62703109e943851080682aca53dc51db5f97f7d1adc"} Jan 26 19:44:28 crc kubenswrapper[4787]: I0126 19:44:28.765631 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfsx6" event={"ID":"58cd99a2-b2c1-4bbb-aaec-163629ee1e64","Type":"ContainerStarted","Data":"7f0c2b3921e3175ee29f8d28ed9a264aa90bd5657263a592ab6b4096fd74b4c0"} Jan 26 19:44:30 crc kubenswrapper[4787]: I0126 19:44:30.788268 4787 generic.go:334] "Generic (PLEG): container finished" podID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" containerID="8e87e6a17b8d5d210b18b33086e649ef0f77ba17a3af9f4081a074c3ed235b12" exitCode=0 Jan 26 19:44:30 crc kubenswrapper[4787]: I0126 19:44:30.788810 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfsx6" event={"ID":"58cd99a2-b2c1-4bbb-aaec-163629ee1e64","Type":"ContainerDied","Data":"8e87e6a17b8d5d210b18b33086e649ef0f77ba17a3af9f4081a074c3ed235b12"} Jan 26 19:44:31 crc kubenswrapper[4787]: I0126 19:44:31.801985 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfsx6" 
event={"ID":"58cd99a2-b2c1-4bbb-aaec-163629ee1e64","Type":"ContainerStarted","Data":"ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60"} Jan 26 19:44:31 crc kubenswrapper[4787]: I0126 19:44:31.825907 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xfsx6" podStartSLOduration=2.380673254 podStartE2EDuration="4.825884692s" podCreationTimestamp="2026-01-26 19:44:27 +0000 UTC" firstStartedPulling="2026-01-26 19:44:28.767128012 +0000 UTC m=+7237.474264135" lastFinishedPulling="2026-01-26 19:44:31.21233944 +0000 UTC m=+7239.919475573" observedRunningTime="2026-01-26 19:44:31.825111942 +0000 UTC m=+7240.532248075" watchObservedRunningTime="2026-01-26 19:44:31.825884692 +0000 UTC m=+7240.533020825" Jan 26 19:44:37 crc kubenswrapper[4787]: I0126 19:44:37.744457 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:37 crc kubenswrapper[4787]: I0126 19:44:37.745000 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:37 crc kubenswrapper[4787]: I0126 19:44:37.799366 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:37 crc kubenswrapper[4787]: I0126 19:44:37.904713 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:38 crc kubenswrapper[4787]: I0126 19:44:38.036153 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfsx6"] Jan 26 19:44:39 crc kubenswrapper[4787]: I0126 19:44:39.879776 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xfsx6" podUID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" containerName="registry-server" 
containerID="cri-o://ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60" gracePeriod=2 Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.355209 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.529047 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-utilities\") pod \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.529298 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-catalog-content\") pod \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.529351 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq7ff\" (UniqueName: \"kubernetes.io/projected/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-kube-api-access-hq7ff\") pod \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\" (UID: \"58cd99a2-b2c1-4bbb-aaec-163629ee1e64\") " Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.530131 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-utilities" (OuterVolumeSpecName: "utilities") pod "58cd99a2-b2c1-4bbb-aaec-163629ee1e64" (UID: "58cd99a2-b2c1-4bbb-aaec-163629ee1e64"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.534985 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-kube-api-access-hq7ff" (OuterVolumeSpecName: "kube-api-access-hq7ff") pod "58cd99a2-b2c1-4bbb-aaec-163629ee1e64" (UID: "58cd99a2-b2c1-4bbb-aaec-163629ee1e64"). InnerVolumeSpecName "kube-api-access-hq7ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.558633 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58cd99a2-b2c1-4bbb-aaec-163629ee1e64" (UID: "58cd99a2-b2c1-4bbb-aaec-163629ee1e64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.632034 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.632064 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.632078 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq7ff\" (UniqueName: \"kubernetes.io/projected/58cd99a2-b2c1-4bbb-aaec-163629ee1e64-kube-api-access-hq7ff\") on node \"crc\" DevicePath \"\"" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.892632 4787 generic.go:334] "Generic (PLEG): container finished" podID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" 
containerID="ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60" exitCode=0 Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.892742 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xfsx6" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.892725 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfsx6" event={"ID":"58cd99a2-b2c1-4bbb-aaec-163629ee1e64","Type":"ContainerDied","Data":"ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60"} Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.893608 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfsx6" event={"ID":"58cd99a2-b2c1-4bbb-aaec-163629ee1e64","Type":"ContainerDied","Data":"7f0c2b3921e3175ee29f8d28ed9a264aa90bd5657263a592ab6b4096fd74b4c0"} Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.893643 4787 scope.go:117] "RemoveContainer" containerID="ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.924711 4787 scope.go:117] "RemoveContainer" containerID="8e87e6a17b8d5d210b18b33086e649ef0f77ba17a3af9f4081a074c3ed235b12" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.933296 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfsx6"] Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.950787 4787 scope.go:117] "RemoveContainer" containerID="3c35c5b314567cd88024d62703109e943851080682aca53dc51db5f97f7d1adc" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.952594 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfsx6"] Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.996911 4787 scope.go:117] "RemoveContainer" containerID="ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60" Jan 26 
19:44:40 crc kubenswrapper[4787]: E0126 19:44:40.998100 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60\": container with ID starting with ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60 not found: ID does not exist" containerID="ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.998145 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60"} err="failed to get container status \"ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60\": rpc error: code = NotFound desc = could not find container \"ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60\": container with ID starting with ad347407edc74c70e4e712ff4c957745e821c8c43b864abaf091d837d1434c60 not found: ID does not exist" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.998179 4787 scope.go:117] "RemoveContainer" containerID="8e87e6a17b8d5d210b18b33086e649ef0f77ba17a3af9f4081a074c3ed235b12" Jan 26 19:44:40 crc kubenswrapper[4787]: E0126 19:44:40.998747 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e87e6a17b8d5d210b18b33086e649ef0f77ba17a3af9f4081a074c3ed235b12\": container with ID starting with 8e87e6a17b8d5d210b18b33086e649ef0f77ba17a3af9f4081a074c3ed235b12 not found: ID does not exist" containerID="8e87e6a17b8d5d210b18b33086e649ef0f77ba17a3af9f4081a074c3ed235b12" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.998793 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e87e6a17b8d5d210b18b33086e649ef0f77ba17a3af9f4081a074c3ed235b12"} err="failed to get container status 
\"8e87e6a17b8d5d210b18b33086e649ef0f77ba17a3af9f4081a074c3ed235b12\": rpc error: code = NotFound desc = could not find container \"8e87e6a17b8d5d210b18b33086e649ef0f77ba17a3af9f4081a074c3ed235b12\": container with ID starting with 8e87e6a17b8d5d210b18b33086e649ef0f77ba17a3af9f4081a074c3ed235b12 not found: ID does not exist" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.998825 4787 scope.go:117] "RemoveContainer" containerID="3c35c5b314567cd88024d62703109e943851080682aca53dc51db5f97f7d1adc" Jan 26 19:44:40 crc kubenswrapper[4787]: E0126 19:44:40.999156 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c35c5b314567cd88024d62703109e943851080682aca53dc51db5f97f7d1adc\": container with ID starting with 3c35c5b314567cd88024d62703109e943851080682aca53dc51db5f97f7d1adc not found: ID does not exist" containerID="3c35c5b314567cd88024d62703109e943851080682aca53dc51db5f97f7d1adc" Jan 26 19:44:40 crc kubenswrapper[4787]: I0126 19:44:40.999188 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c35c5b314567cd88024d62703109e943851080682aca53dc51db5f97f7d1adc"} err="failed to get container status \"3c35c5b314567cd88024d62703109e943851080682aca53dc51db5f97f7d1adc\": rpc error: code = NotFound desc = could not find container \"3c35c5b314567cd88024d62703109e943851080682aca53dc51db5f97f7d1adc\": container with ID starting with 3c35c5b314567cd88024d62703109e943851080682aca53dc51db5f97f7d1adc not found: ID does not exist" Jan 26 19:44:41 crc kubenswrapper[4787]: I0126 19:44:41.604335 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" path="/var/lib/kubelet/pods/58cd99a2-b2c1-4bbb-aaec-163629ee1e64/volumes" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.165135 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv"] Jan 26 19:45:00 crc kubenswrapper[4787]: E0126 19:45:00.166084 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.166101 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4787]: E0126 19:45:00.166120 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" containerName="extract-utilities" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.166127 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" containerName="extract-utilities" Jan 26 19:45:00 crc kubenswrapper[4787]: E0126 19:45:00.166161 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" containerName="extract-content" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.166167 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" containerName="extract-content" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.166395 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cd99a2-b2c1-4bbb-aaec-163629ee1e64" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.167360 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.169695 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.175316 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.187960 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv"] Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.355975 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-config-volume\") pod \"collect-profiles-29490945-zkthv\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.356047 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf8t4\" (UniqueName: \"kubernetes.io/projected/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-kube-api-access-cf8t4\") pod \"collect-profiles-29490945-zkthv\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.356190 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-secret-volume\") pod \"collect-profiles-29490945-zkthv\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.458331 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-config-volume\") pod \"collect-profiles-29490945-zkthv\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.458424 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf8t4\" (UniqueName: \"kubernetes.io/projected/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-kube-api-access-cf8t4\") pod \"collect-profiles-29490945-zkthv\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.458600 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-secret-volume\") pod \"collect-profiles-29490945-zkthv\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.460166 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-config-volume\") pod \"collect-profiles-29490945-zkthv\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.464574 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-secret-volume\") pod \"collect-profiles-29490945-zkthv\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.482349 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf8t4\" (UniqueName: \"kubernetes.io/projected/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-kube-api-access-cf8t4\") pod \"collect-profiles-29490945-zkthv\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.496164 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:00 crc kubenswrapper[4787]: I0126 19:45:00.809110 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv"] Jan 26 19:45:01 crc kubenswrapper[4787]: I0126 19:45:01.100430 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" event={"ID":"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b","Type":"ContainerStarted","Data":"4061af3e87cdad72fc86fac700950b64639f9cfa939a380c18c4efc974aef4b9"} Jan 26 19:45:01 crc kubenswrapper[4787]: I0126 19:45:01.100685 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" event={"ID":"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b","Type":"ContainerStarted","Data":"1f1f6e7d2c7c3bfe66a3e4767d09be780ee82371e6e8c1b17ca8ae4f9163ca83"} Jan 26 19:45:01 crc kubenswrapper[4787]: I0126 19:45:01.123177 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" 
podStartSLOduration=1.123159272 podStartE2EDuration="1.123159272s" podCreationTimestamp="2026-01-26 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:45:01.113789082 +0000 UTC m=+7269.820925215" watchObservedRunningTime="2026-01-26 19:45:01.123159272 +0000 UTC m=+7269.830295405" Jan 26 19:45:02 crc kubenswrapper[4787]: I0126 19:45:02.111666 4787 generic.go:334] "Generic (PLEG): container finished" podID="ef9034e6-d02c-4c85-8c8a-24013cdbdb1b" containerID="4061af3e87cdad72fc86fac700950b64639f9cfa939a380c18c4efc974aef4b9" exitCode=0 Jan 26 19:45:02 crc kubenswrapper[4787]: I0126 19:45:02.111775 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" event={"ID":"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b","Type":"ContainerDied","Data":"4061af3e87cdad72fc86fac700950b64639f9cfa939a380c18c4efc974aef4b9"} Jan 26 19:45:03 crc kubenswrapper[4787]: I0126 19:45:03.520487 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:03 crc kubenswrapper[4787]: I0126 19:45:03.526610 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-config-volume\") pod \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " Jan 26 19:45:03 crc kubenswrapper[4787]: I0126 19:45:03.526747 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf8t4\" (UniqueName: \"kubernetes.io/projected/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-kube-api-access-cf8t4\") pod \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " Jan 26 19:45:03 crc kubenswrapper[4787]: I0126 19:45:03.526859 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-secret-volume\") pod \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\" (UID: \"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b\") " Jan 26 19:45:03 crc kubenswrapper[4787]: I0126 19:45:03.527376 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-config-volume" (OuterVolumeSpecName: "config-volume") pod "ef9034e6-d02c-4c85-8c8a-24013cdbdb1b" (UID: "ef9034e6-d02c-4c85-8c8a-24013cdbdb1b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:45:03 crc kubenswrapper[4787]: I0126 19:45:03.531914 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-kube-api-access-cf8t4" (OuterVolumeSpecName: "kube-api-access-cf8t4") pod "ef9034e6-d02c-4c85-8c8a-24013cdbdb1b" (UID: "ef9034e6-d02c-4c85-8c8a-24013cdbdb1b"). 
InnerVolumeSpecName "kube-api-access-cf8t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:45:03 crc kubenswrapper[4787]: I0126 19:45:03.532258 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ef9034e6-d02c-4c85-8c8a-24013cdbdb1b" (UID: "ef9034e6-d02c-4c85-8c8a-24013cdbdb1b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:45:03 crc kubenswrapper[4787]: I0126 19:45:03.628817 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:03 crc kubenswrapper[4787]: I0126 19:45:03.628850 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:03 crc kubenswrapper[4787]: I0126 19:45:03.628863 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf8t4\" (UniqueName: \"kubernetes.io/projected/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b-kube-api-access-cf8t4\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:04 crc kubenswrapper[4787]: I0126 19:45:04.139441 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" event={"ID":"ef9034e6-d02c-4c85-8c8a-24013cdbdb1b","Type":"ContainerDied","Data":"1f1f6e7d2c7c3bfe66a3e4767d09be780ee82371e6e8c1b17ca8ae4f9163ca83"} Jan 26 19:45:04 crc kubenswrapper[4787]: I0126 19:45:04.140177 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f1f6e7d2c7c3bfe66a3e4767d09be780ee82371e6e8c1b17ca8ae4f9163ca83" Jan 26 19:45:04 crc kubenswrapper[4787]: I0126 19:45:04.139494 4787 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv" Jan 26 19:45:04 crc kubenswrapper[4787]: I0126 19:45:04.603488 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm"] Jan 26 19:45:04 crc kubenswrapper[4787]: I0126 19:45:04.615148 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-544pm"] Jan 26 19:45:05 crc kubenswrapper[4787]: I0126 19:45:05.603762 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431e05a1-e497-43f1-94b5-af113f71f052" path="/var/lib/kubelet/pods/431e05a1-e497-43f1-94b5-af113f71f052/volumes" Jan 26 19:45:42 crc kubenswrapper[4787]: I0126 19:45:42.544969 4787 generic.go:334] "Generic (PLEG): container finished" podID="8a7b5882-d776-47f7-a895-ff7728795475" containerID="d6ceab0c65dab244d705b9e185a23af23745a63d05516d709e3cb6ad08d9f6d4" exitCode=0 Jan 26 19:45:42 crc kubenswrapper[4787]: I0126 19:45:42.545047 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-v7456" event={"ID":"8a7b5882-d776-47f7-a895-ff7728795475","Type":"ContainerDied","Data":"d6ceab0c65dab244d705b9e185a23af23745a63d05516d709e3cb6ad08d9f6d4"} Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.097123 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.153902 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ceph\") pod \"8a7b5882-d776-47f7-a895-ff7728795475\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.154028 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ssh-key-openstack-cell1\") pod \"8a7b5882-d776-47f7-a895-ff7728795475\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.154150 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-inventory\") pod \"8a7b5882-d776-47f7-a895-ff7728795475\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.154290 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlm7d\" (UniqueName: \"kubernetes.io/projected/8a7b5882-d776-47f7-a895-ff7728795475-kube-api-access-rlm7d\") pod \"8a7b5882-d776-47f7-a895-ff7728795475\" (UID: \"8a7b5882-d776-47f7-a895-ff7728795475\") " Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.159643 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7b5882-d776-47f7-a895-ff7728795475-kube-api-access-rlm7d" (OuterVolumeSpecName: "kube-api-access-rlm7d") pod "8a7b5882-d776-47f7-a895-ff7728795475" (UID: "8a7b5882-d776-47f7-a895-ff7728795475"). InnerVolumeSpecName "kube-api-access-rlm7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.159838 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ceph" (OuterVolumeSpecName: "ceph") pod "8a7b5882-d776-47f7-a895-ff7728795475" (UID: "8a7b5882-d776-47f7-a895-ff7728795475"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.188063 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-inventory" (OuterVolumeSpecName: "inventory") pod "8a7b5882-d776-47f7-a895-ff7728795475" (UID: "8a7b5882-d776-47f7-a895-ff7728795475"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.188445 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8a7b5882-d776-47f7-a895-ff7728795475" (UID: "8a7b5882-d776-47f7-a895-ff7728795475"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.256862 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.256900 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlm7d\" (UniqueName: \"kubernetes.io/projected/8a7b5882-d776-47f7-a895-ff7728795475-kube-api-access-rlm7d\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.256913 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.256924 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8a7b5882-d776-47f7-a895-ff7728795475-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.563854 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-v7456" event={"ID":"8a7b5882-d776-47f7-a895-ff7728795475","Type":"ContainerDied","Data":"7d4811bd8414c87c13a1d72539a785a8dfa3d825eed7e234d846e1bfde1b033d"} Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.563893 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d4811bd8414c87c13a1d72539a785a8dfa3d825eed7e234d846e1bfde1b033d" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.563937 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-v7456" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.654797 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-8d7vx"] Jan 26 19:45:44 crc kubenswrapper[4787]: E0126 19:45:44.655334 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9034e6-d02c-4c85-8c8a-24013cdbdb1b" containerName="collect-profiles" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.655354 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9034e6-d02c-4c85-8c8a-24013cdbdb1b" containerName="collect-profiles" Jan 26 19:45:44 crc kubenswrapper[4787]: E0126 19:45:44.655421 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7b5882-d776-47f7-a895-ff7728795475" containerName="download-cache-openstack-openstack-cell1" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.655430 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7b5882-d776-47f7-a895-ff7728795475" containerName="download-cache-openstack-openstack-cell1" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.655724 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7b5882-d776-47f7-a895-ff7728795475" containerName="download-cache-openstack-openstack-cell1" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.655760 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef9034e6-d02c-4c85-8c8a-24013cdbdb1b" containerName="collect-profiles" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.656647 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.659159 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.672199 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-8d7vx"] Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.703387 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.703754 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.704182 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.869580 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f847w\" (UniqueName: \"kubernetes.io/projected/afa3f89e-2ba2-46b5-b87b-7d572971a173-kube-api-access-f847w\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.869664 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-inventory\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.869702 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.869782 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ceph\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.972308 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ceph\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.972452 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f847w\" (UniqueName: \"kubernetes.io/projected/afa3f89e-2ba2-46b5-b87b-7d572971a173-kube-api-access-f847w\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.972519 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-inventory\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: 
\"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.972554 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.979916 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.980503 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ceph\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.988871 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-inventory\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:44 crc kubenswrapper[4787]: I0126 19:45:44.989855 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f847w\" (UniqueName: 
\"kubernetes.io/projected/afa3f89e-2ba2-46b5-b87b-7d572971a173-kube-api-access-f847w\") pod \"configure-network-openstack-openstack-cell1-8d7vx\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:45 crc kubenswrapper[4787]: I0126 19:45:45.013471 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:45:45 crc kubenswrapper[4787]: I0126 19:45:45.715138 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-8d7vx"] Jan 26 19:45:45 crc kubenswrapper[4787]: W0126 19:45:45.722054 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafa3f89e_2ba2_46b5_b87b_7d572971a173.slice/crio-663a8d2cf390e3c9d0db810dbc7f4b3f0a0b55d4e5efbb2930e174cabc549e2d WatchSource:0}: Error finding container 663a8d2cf390e3c9d0db810dbc7f4b3f0a0b55d4e5efbb2930e174cabc549e2d: Status 404 returned error can't find the container with id 663a8d2cf390e3c9d0db810dbc7f4b3f0a0b55d4e5efbb2930e174cabc549e2d Jan 26 19:45:46 crc kubenswrapper[4787]: I0126 19:45:46.586914 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" event={"ID":"afa3f89e-2ba2-46b5-b87b-7d572971a173","Type":"ContainerStarted","Data":"0142b8a924489cb2144e4884b701a084e8f569b94ee7eb8ef4afa98776314f57"} Jan 26 19:45:46 crc kubenswrapper[4787]: I0126 19:45:46.587435 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" event={"ID":"afa3f89e-2ba2-46b5-b87b-7d572971a173","Type":"ContainerStarted","Data":"663a8d2cf390e3c9d0db810dbc7f4b3f0a0b55d4e5efbb2930e174cabc549e2d"} Jan 26 19:45:46 crc kubenswrapper[4787]: I0126 19:45:46.606788 4787 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" podStartSLOduration=2.06109596 podStartE2EDuration="2.606766907s" podCreationTimestamp="2026-01-26 19:45:44 +0000 UTC" firstStartedPulling="2026-01-26 19:45:45.724521335 +0000 UTC m=+7314.431657468" lastFinishedPulling="2026-01-26 19:45:46.270192292 +0000 UTC m=+7314.977328415" observedRunningTime="2026-01-26 19:45:46.604203774 +0000 UTC m=+7315.311339907" watchObservedRunningTime="2026-01-26 19:45:46.606766907 +0000 UTC m=+7315.313903030" Jan 26 19:46:03 crc kubenswrapper[4787]: I0126 19:46:03.979977 4787 scope.go:117] "RemoveContainer" containerID="6d472819035d61f35ca2c9bb3ce13e24a58ac5c68f159f2ce0cbd440ed4601da" Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.718050 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pjgjs"] Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.721352 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.743439 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pjgjs"] Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.821915 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-catalog-content\") pod \"redhat-operators-pjgjs\" (UID: \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.822165 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27sc\" (UniqueName: \"kubernetes.io/projected/0b406b51-5f61-4549-a7cd-cd39a853c4dd-kube-api-access-b27sc\") pod \"redhat-operators-pjgjs\" (UID: 
\"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.822233 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-utilities\") pod \"redhat-operators-pjgjs\" (UID: \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.924161 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b27sc\" (UniqueName: \"kubernetes.io/projected/0b406b51-5f61-4549-a7cd-cd39a853c4dd-kube-api-access-b27sc\") pod \"redhat-operators-pjgjs\" (UID: \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.924275 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-utilities\") pod \"redhat-operators-pjgjs\" (UID: \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.924386 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-catalog-content\") pod \"redhat-operators-pjgjs\" (UID: \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.924982 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-catalog-content\") pod \"redhat-operators-pjgjs\" (UID: \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " 
pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.925310 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-utilities\") pod \"redhat-operators-pjgjs\" (UID: \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:46:39 crc kubenswrapper[4787]: I0126 19:46:39.951799 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b27sc\" (UniqueName: \"kubernetes.io/projected/0b406b51-5f61-4549-a7cd-cd39a853c4dd-kube-api-access-b27sc\") pod \"redhat-operators-pjgjs\" (UID: \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:46:40 crc kubenswrapper[4787]: I0126 19:46:40.065386 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:46:40 crc kubenswrapper[4787]: I0126 19:46:40.556985 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pjgjs"] Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.113072 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c7df8"] Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.115369 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.127425 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c7df8"] Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.130436 4787 generic.go:334] "Generic (PLEG): container finished" podID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerID="b731810bc1c75a218963383eb74b262f38c3745ade0aa152c23065affb4dd495" exitCode=0 Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.130631 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjgjs" event={"ID":"0b406b51-5f61-4549-a7cd-cd39a853c4dd","Type":"ContainerDied","Data":"b731810bc1c75a218963383eb74b262f38c3745ade0aa152c23065affb4dd495"} Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.130739 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjgjs" event={"ID":"0b406b51-5f61-4549-a7cd-cd39a853c4dd","Type":"ContainerStarted","Data":"e5e7f0c5410f5d4ead77873ef6c9dd29fb8e708460d162c57d1128ca7ea7163f"} Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.133758 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.248414 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-catalog-content\") pod \"certified-operators-c7df8\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.248818 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm2xf\" (UniqueName: 
\"kubernetes.io/projected/92020097-96c4-494f-a447-df16645e2d7a-kube-api-access-vm2xf\") pod \"certified-operators-c7df8\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.248912 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-utilities\") pod \"certified-operators-c7df8\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.351727 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-catalog-content\") pod \"certified-operators-c7df8\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.351836 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm2xf\" (UniqueName: \"kubernetes.io/projected/92020097-96c4-494f-a447-df16645e2d7a-kube-api-access-vm2xf\") pod \"certified-operators-c7df8\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.351872 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-utilities\") pod \"certified-operators-c7df8\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.352446 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-utilities\") pod \"certified-operators-c7df8\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.352500 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-catalog-content\") pod \"certified-operators-c7df8\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.372572 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm2xf\" (UniqueName: \"kubernetes.io/projected/92020097-96c4-494f-a447-df16645e2d7a-kube-api-access-vm2xf\") pod \"certified-operators-c7df8\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.439081 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:46:41 crc kubenswrapper[4787]: W0126 19:46:41.946139 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92020097_96c4_494f_a447_df16645e2d7a.slice/crio-505b98002fd875f9c32d9fe8926ea37c5527d9748d229c8021cb573ca36989cf WatchSource:0}: Error finding container 505b98002fd875f9c32d9fe8926ea37c5527d9748d229c8021cb573ca36989cf: Status 404 returned error can't find the container with id 505b98002fd875f9c32d9fe8926ea37c5527d9748d229c8021cb573ca36989cf Jan 26 19:46:41 crc kubenswrapper[4787]: I0126 19:46:41.962455 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c7df8"] Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.106633 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rrm5q"] Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.109493 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.124883 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrm5q"] Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.145907 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7df8" event={"ID":"92020097-96c4-494f-a447-df16645e2d7a","Type":"ContainerStarted","Data":"505b98002fd875f9c32d9fe8926ea37c5527d9748d229c8021cb573ca36989cf"} Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.170208 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-catalog-content\") pod \"community-operators-rrm5q\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.170344 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz6lr\" (UniqueName: \"kubernetes.io/projected/38b459fb-c72c-4a65-bd84-65e43283ad69-kube-api-access-lz6lr\") pod \"community-operators-rrm5q\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.170433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-utilities\") pod \"community-operators-rrm5q\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.271926 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-utilities\") pod \"community-operators-rrm5q\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.272114 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-catalog-content\") pod \"community-operators-rrm5q\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.272171 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz6lr\" (UniqueName: \"kubernetes.io/projected/38b459fb-c72c-4a65-bd84-65e43283ad69-kube-api-access-lz6lr\") pod \"community-operators-rrm5q\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.272753 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-catalog-content\") pod \"community-operators-rrm5q\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.272875 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-utilities\") pod \"community-operators-rrm5q\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.292503 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz6lr\" (UniqueName: 
\"kubernetes.io/projected/38b459fb-c72c-4a65-bd84-65e43283ad69-kube-api-access-lz6lr\") pod \"community-operators-rrm5q\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.446346 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:42 crc kubenswrapper[4787]: I0126 19:46:42.998657 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrm5q"] Jan 26 19:46:43 crc kubenswrapper[4787]: W0126 19:46:43.000905 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38b459fb_c72c_4a65_bd84_65e43283ad69.slice/crio-7dbb37ff13ecb226cc6aec81792f3f2d2cc2764b00c5bfd12bd0fca446fdcb93 WatchSource:0}: Error finding container 7dbb37ff13ecb226cc6aec81792f3f2d2cc2764b00c5bfd12bd0fca446fdcb93: Status 404 returned error can't find the container with id 7dbb37ff13ecb226cc6aec81792f3f2d2cc2764b00c5bfd12bd0fca446fdcb93 Jan 26 19:46:43 crc kubenswrapper[4787]: I0126 19:46:43.160929 4787 generic.go:334] "Generic (PLEG): container finished" podID="92020097-96c4-494f-a447-df16645e2d7a" containerID="afc12f81dfe0e53c25ce56e14f51897917ded260c0a8e26270c443e5202b1a49" exitCode=0 Jan 26 19:46:43 crc kubenswrapper[4787]: I0126 19:46:43.161469 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7df8" event={"ID":"92020097-96c4-494f-a447-df16645e2d7a","Type":"ContainerDied","Data":"afc12f81dfe0e53c25ce56e14f51897917ded260c0a8e26270c443e5202b1a49"} Jan 26 19:46:43 crc kubenswrapper[4787]: I0126 19:46:43.166095 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrm5q" 
event={"ID":"38b459fb-c72c-4a65-bd84-65e43283ad69","Type":"ContainerStarted","Data":"7dbb37ff13ecb226cc6aec81792f3f2d2cc2764b00c5bfd12bd0fca446fdcb93"} Jan 26 19:46:44 crc kubenswrapper[4787]: I0126 19:46:44.478255 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjgjs" event={"ID":"0b406b51-5f61-4549-a7cd-cd39a853c4dd","Type":"ContainerStarted","Data":"6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de"} Jan 26 19:46:44 crc kubenswrapper[4787]: I0126 19:46:44.481845 4787 generic.go:334] "Generic (PLEG): container finished" podID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerID="29d2967e629d569bc1f0af825652e83f7ae5a1cd6875b506c2761cd19e5660bb" exitCode=0 Jan 26 19:46:44 crc kubenswrapper[4787]: I0126 19:46:44.481873 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrm5q" event={"ID":"38b459fb-c72c-4a65-bd84-65e43283ad69","Type":"ContainerDied","Data":"29d2967e629d569bc1f0af825652e83f7ae5a1cd6875b506c2761cd19e5660bb"} Jan 26 19:46:46 crc kubenswrapper[4787]: I0126 19:46:46.808072 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:46:46 crc kubenswrapper[4787]: I0126 19:46:46.808411 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:46:49 crc kubenswrapper[4787]: E0126 19:46:49.497421 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38b459fb_c72c_4a65_bd84_65e43283ad69.slice/crio-ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0.scope\": RecentStats: unable to find data in memory cache]" Jan 26 19:46:49 crc kubenswrapper[4787]: I0126 19:46:49.530002 4787 generic.go:334] "Generic (PLEG): container finished" podID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerID="6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de" exitCode=0 Jan 26 19:46:49 crc kubenswrapper[4787]: I0126 19:46:49.530073 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjgjs" event={"ID":"0b406b51-5f61-4549-a7cd-cd39a853c4dd","Type":"ContainerDied","Data":"6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de"} Jan 26 19:46:49 crc kubenswrapper[4787]: I0126 19:46:49.534299 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7df8" event={"ID":"92020097-96c4-494f-a447-df16645e2d7a","Type":"ContainerStarted","Data":"8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725"} Jan 26 19:46:49 crc kubenswrapper[4787]: I0126 19:46:49.537412 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrm5q" event={"ID":"38b459fb-c72c-4a65-bd84-65e43283ad69","Type":"ContainerStarted","Data":"ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0"} Jan 26 19:46:50 crc kubenswrapper[4787]: I0126 19:46:50.550425 4787 generic.go:334] "Generic (PLEG): container finished" podID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerID="ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0" exitCode=0 Jan 26 19:46:50 crc kubenswrapper[4787]: I0126 19:46:50.550540 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrm5q" 
event={"ID":"38b459fb-c72c-4a65-bd84-65e43283ad69","Type":"ContainerDied","Data":"ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0"} Jan 26 19:46:51 crc kubenswrapper[4787]: I0126 19:46:51.563899 4787 generic.go:334] "Generic (PLEG): container finished" podID="92020097-96c4-494f-a447-df16645e2d7a" containerID="8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725" exitCode=0 Jan 26 19:46:51 crc kubenswrapper[4787]: I0126 19:46:51.564004 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7df8" event={"ID":"92020097-96c4-494f-a447-df16645e2d7a","Type":"ContainerDied","Data":"8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725"} Jan 26 19:46:51 crc kubenswrapper[4787]: I0126 19:46:51.580460 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrm5q" event={"ID":"38b459fb-c72c-4a65-bd84-65e43283ad69","Type":"ContainerStarted","Data":"b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98"} Jan 26 19:46:51 crc kubenswrapper[4787]: I0126 19:46:51.619629 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjgjs" event={"ID":"0b406b51-5f61-4549-a7cd-cd39a853c4dd","Type":"ContainerStarted","Data":"e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3"} Jan 26 19:46:51 crc kubenswrapper[4787]: I0126 19:46:51.629227 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rrm5q" podStartSLOduration=2.9140278779999997 podStartE2EDuration="9.629211869s" podCreationTimestamp="2026-01-26 19:46:42 +0000 UTC" firstStartedPulling="2026-01-26 19:46:44.48382315 +0000 UTC m=+7373.190959273" lastFinishedPulling="2026-01-26 19:46:51.199007131 +0000 UTC m=+7379.906143264" observedRunningTime="2026-01-26 19:46:51.628123633 +0000 UTC m=+7380.335259766" watchObservedRunningTime="2026-01-26 19:46:51.629211869 +0000 UTC 
m=+7380.336348002" Jan 26 19:46:51 crc kubenswrapper[4787]: I0126 19:46:51.656984 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pjgjs" podStartSLOduration=3.16640376 podStartE2EDuration="12.656934878s" podCreationTimestamp="2026-01-26 19:46:39 +0000 UTC" firstStartedPulling="2026-01-26 19:46:41.13355862 +0000 UTC m=+7369.840694753" lastFinishedPulling="2026-01-26 19:46:50.624089738 +0000 UTC m=+7379.331225871" observedRunningTime="2026-01-26 19:46:51.650553573 +0000 UTC m=+7380.357689706" watchObservedRunningTime="2026-01-26 19:46:51.656934878 +0000 UTC m=+7380.364071011" Jan 26 19:46:52 crc kubenswrapper[4787]: I0126 19:46:52.446988 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:52 crc kubenswrapper[4787]: I0126 19:46:52.447592 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:46:52 crc kubenswrapper[4787]: I0126 19:46:52.636345 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7df8" event={"ID":"92020097-96c4-494f-a447-df16645e2d7a","Type":"ContainerStarted","Data":"8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544"} Jan 26 19:46:52 crc kubenswrapper[4787]: I0126 19:46:52.670389 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c7df8" podStartSLOduration=2.860516521 podStartE2EDuration="11.670366764s" podCreationTimestamp="2026-01-26 19:46:41 +0000 UTC" firstStartedPulling="2026-01-26 19:46:43.16365486 +0000 UTC m=+7371.870790993" lastFinishedPulling="2026-01-26 19:46:51.973505093 +0000 UTC m=+7380.680641236" observedRunningTime="2026-01-26 19:46:52.652979779 +0000 UTC m=+7381.360115932" watchObservedRunningTime="2026-01-26 19:46:52.670366764 +0000 UTC m=+7381.377502897" Jan 26 
19:46:53 crc kubenswrapper[4787]: I0126 19:46:53.499349 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rrm5q" podUID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerName="registry-server" probeResult="failure" output=< Jan 26 19:46:53 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 26 19:46:53 crc kubenswrapper[4787]: > Jan 26 19:47:00 crc kubenswrapper[4787]: I0126 19:47:00.065747 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:47:00 crc kubenswrapper[4787]: I0126 19:47:00.066265 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:47:01 crc kubenswrapper[4787]: I0126 19:47:01.114505 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pjgjs" podUID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerName="registry-server" probeResult="failure" output=< Jan 26 19:47:01 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 26 19:47:01 crc kubenswrapper[4787]: > Jan 26 19:47:01 crc kubenswrapper[4787]: I0126 19:47:01.440024 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:47:01 crc kubenswrapper[4787]: I0126 19:47:01.440085 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:47:01 crc kubenswrapper[4787]: I0126 19:47:01.488338 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:47:01 crc kubenswrapper[4787]: I0126 19:47:01.791548 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:47:01 crc 
kubenswrapper[4787]: I0126 19:47:01.853336 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c7df8"] Jan 26 19:47:02 crc kubenswrapper[4787]: I0126 19:47:02.504283 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:47:02 crc kubenswrapper[4787]: I0126 19:47:02.562317 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:47:02 crc kubenswrapper[4787]: I0126 19:47:02.735872 4787 generic.go:334] "Generic (PLEG): container finished" podID="afa3f89e-2ba2-46b5-b87b-7d572971a173" containerID="0142b8a924489cb2144e4884b701a084e8f569b94ee7eb8ef4afa98776314f57" exitCode=0 Jan 26 19:47:02 crc kubenswrapper[4787]: I0126 19:47:02.735928 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" event={"ID":"afa3f89e-2ba2-46b5-b87b-7d572971a173","Type":"ContainerDied","Data":"0142b8a924489cb2144e4884b701a084e8f569b94ee7eb8ef4afa98776314f57"} Jan 26 19:47:03 crc kubenswrapper[4787]: I0126 19:47:03.745144 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c7df8" podUID="92020097-96c4-494f-a447-df16645e2d7a" containerName="registry-server" containerID="cri-o://8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544" gracePeriod=2 Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.133242 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rrm5q"] Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.133715 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rrm5q" podUID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerName="registry-server" 
containerID="cri-o://b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98" gracePeriod=2 Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.207596 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.316785 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ceph\") pod \"afa3f89e-2ba2-46b5-b87b-7d572971a173\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.316893 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ssh-key-openstack-cell1\") pod \"afa3f89e-2ba2-46b5-b87b-7d572971a173\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.317007 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-inventory\") pod \"afa3f89e-2ba2-46b5-b87b-7d572971a173\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.317193 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f847w\" (UniqueName: \"kubernetes.io/projected/afa3f89e-2ba2-46b5-b87b-7d572971a173-kube-api-access-f847w\") pod \"afa3f89e-2ba2-46b5-b87b-7d572971a173\" (UID: \"afa3f89e-2ba2-46b5-b87b-7d572971a173\") " Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.322853 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ceph" (OuterVolumeSpecName: "ceph") pod "afa3f89e-2ba2-46b5-b87b-7d572971a173" 
(UID: "afa3f89e-2ba2-46b5-b87b-7d572971a173"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.323163 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa3f89e-2ba2-46b5-b87b-7d572971a173-kube-api-access-f847w" (OuterVolumeSpecName: "kube-api-access-f847w") pod "afa3f89e-2ba2-46b5-b87b-7d572971a173" (UID: "afa3f89e-2ba2-46b5-b87b-7d572971a173"). InnerVolumeSpecName "kube-api-access-f847w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.349866 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "afa3f89e-2ba2-46b5-b87b-7d572971a173" (UID: "afa3f89e-2ba2-46b5-b87b-7d572971a173"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.363501 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-inventory" (OuterVolumeSpecName: "inventory") pod "afa3f89e-2ba2-46b5-b87b-7d572971a173" (UID: "afa3f89e-2ba2-46b5-b87b-7d572971a173"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.381916 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.419567 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm2xf\" (UniqueName: \"kubernetes.io/projected/92020097-96c4-494f-a447-df16645e2d7a-kube-api-access-vm2xf\") pod \"92020097-96c4-494f-a447-df16645e2d7a\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.419812 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-utilities\") pod \"92020097-96c4-494f-a447-df16645e2d7a\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.419848 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-catalog-content\") pod \"92020097-96c4-494f-a447-df16645e2d7a\" (UID: \"92020097-96c4-494f-a447-df16645e2d7a\") " Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.420498 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f847w\" (UniqueName: \"kubernetes.io/projected/afa3f89e-2ba2-46b5-b87b-7d572971a173-kube-api-access-f847w\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.420521 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.420531 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:04 crc kubenswrapper[4787]: 
I0126 19:47:04.420540 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afa3f89e-2ba2-46b5-b87b-7d572971a173-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.420827 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-utilities" (OuterVolumeSpecName: "utilities") pod "92020097-96c4-494f-a447-df16645e2d7a" (UID: "92020097-96c4-494f-a447-df16645e2d7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.423785 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92020097-96c4-494f-a447-df16645e2d7a-kube-api-access-vm2xf" (OuterVolumeSpecName: "kube-api-access-vm2xf") pod "92020097-96c4-494f-a447-df16645e2d7a" (UID: "92020097-96c4-494f-a447-df16645e2d7a"). InnerVolumeSpecName "kube-api-access-vm2xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.465362 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92020097-96c4-494f-a447-df16645e2d7a" (UID: "92020097-96c4-494f-a447-df16645e2d7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.522773 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm2xf\" (UniqueName: \"kubernetes.io/projected/92020097-96c4-494f-a447-df16645e2d7a-kube-api-access-vm2xf\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.522833 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.522853 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92020097-96c4-494f-a447-df16645e2d7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.597391 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.624580 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-utilities\") pod \"38b459fb-c72c-4a65-bd84-65e43283ad69\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.624754 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-catalog-content\") pod \"38b459fb-c72c-4a65-bd84-65e43283ad69\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.624868 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz6lr\" (UniqueName: 
\"kubernetes.io/projected/38b459fb-c72c-4a65-bd84-65e43283ad69-kube-api-access-lz6lr\") pod \"38b459fb-c72c-4a65-bd84-65e43283ad69\" (UID: \"38b459fb-c72c-4a65-bd84-65e43283ad69\") " Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.625319 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-utilities" (OuterVolumeSpecName: "utilities") pod "38b459fb-c72c-4a65-bd84-65e43283ad69" (UID: "38b459fb-c72c-4a65-bd84-65e43283ad69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.625679 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.629083 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b459fb-c72c-4a65-bd84-65e43283ad69-kube-api-access-lz6lr" (OuterVolumeSpecName: "kube-api-access-lz6lr") pod "38b459fb-c72c-4a65-bd84-65e43283ad69" (UID: "38b459fb-c72c-4a65-bd84-65e43283ad69"). InnerVolumeSpecName "kube-api-access-lz6lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.697547 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38b459fb-c72c-4a65-bd84-65e43283ad69" (UID: "38b459fb-c72c-4a65-bd84-65e43283ad69"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.727406 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38b459fb-c72c-4a65-bd84-65e43283ad69-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.727445 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz6lr\" (UniqueName: \"kubernetes.io/projected/38b459fb-c72c-4a65-bd84-65e43283ad69-kube-api-access-lz6lr\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.778140 4787 generic.go:334] "Generic (PLEG): container finished" podID="92020097-96c4-494f-a447-df16645e2d7a" containerID="8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544" exitCode=0 Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.779566 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c7df8" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.780054 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7df8" event={"ID":"92020097-96c4-494f-a447-df16645e2d7a","Type":"ContainerDied","Data":"8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544"} Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.780127 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7df8" event={"ID":"92020097-96c4-494f-a447-df16645e2d7a","Type":"ContainerDied","Data":"505b98002fd875f9c32d9fe8926ea37c5527d9748d229c8021cb573ca36989cf"} Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.780151 4787 scope.go:117] "RemoveContainer" containerID="8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.796046 4787 generic.go:334] "Generic (PLEG): container 
finished" podID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerID="b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98" exitCode=0 Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.796127 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrm5q" event={"ID":"38b459fb-c72c-4a65-bd84-65e43283ad69","Type":"ContainerDied","Data":"b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98"} Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.796163 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrm5q" event={"ID":"38b459fb-c72c-4a65-bd84-65e43283ad69","Type":"ContainerDied","Data":"7dbb37ff13ecb226cc6aec81792f3f2d2cc2764b00c5bfd12bd0fca446fdcb93"} Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.796245 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrm5q" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.827418 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" event={"ID":"afa3f89e-2ba2-46b5-b87b-7d572971a173","Type":"ContainerDied","Data":"663a8d2cf390e3c9d0db810dbc7f4b3f0a0b55d4e5efbb2930e174cabc549e2d"} Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.827646 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="663a8d2cf390e3c9d0db810dbc7f4b3f0a0b55d4e5efbb2930e174cabc549e2d" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.827809 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-8d7vx" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.852696 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c7df8"] Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.864226 4787 scope.go:117] "RemoveContainer" containerID="8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.876032 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c7df8"] Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.893371 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rrm5q"] Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.911118 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rrm5q"] Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.975816 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-r6fvq"] Jan 26 19:47:04 crc kubenswrapper[4787]: E0126 19:47:04.989018 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerName="registry-server" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.989059 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerName="registry-server" Jan 26 19:47:04 crc kubenswrapper[4787]: E0126 19:47:04.989088 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92020097-96c4-494f-a447-df16645e2d7a" containerName="extract-content" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.989095 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="92020097-96c4-494f-a447-df16645e2d7a" containerName="extract-content" Jan 26 19:47:04 crc kubenswrapper[4787]: E0126 19:47:04.989113 
4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa3f89e-2ba2-46b5-b87b-7d572971a173" containerName="configure-network-openstack-openstack-cell1" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.989123 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa3f89e-2ba2-46b5-b87b-7d572971a173" containerName="configure-network-openstack-openstack-cell1" Jan 26 19:47:04 crc kubenswrapper[4787]: E0126 19:47:04.989169 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92020097-96c4-494f-a447-df16645e2d7a" containerName="extract-utilities" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.989176 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="92020097-96c4-494f-a447-df16645e2d7a" containerName="extract-utilities" Jan 26 19:47:04 crc kubenswrapper[4787]: E0126 19:47:04.989199 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerName="extract-content" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.989205 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerName="extract-content" Jan 26 19:47:04 crc kubenswrapper[4787]: E0126 19:47:04.989232 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92020097-96c4-494f-a447-df16645e2d7a" containerName="registry-server" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.989238 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="92020097-96c4-494f-a447-df16645e2d7a" containerName="registry-server" Jan 26 19:47:04 crc kubenswrapper[4787]: E0126 19:47:04.989266 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerName="extract-utilities" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.989272 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerName="extract-utilities" Jan 26 
19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.990743 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b459fb-c72c-4a65-bd84-65e43283ad69" containerName="registry-server" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.990789 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa3f89e-2ba2-46b5-b87b-7d572971a173" containerName="configure-network-openstack-openstack-cell1" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.990813 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="92020097-96c4-494f-a447-df16645e2d7a" containerName="registry-server" Jan 26 19:47:04 crc kubenswrapper[4787]: I0126 19:47:04.992581 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.008700 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.008859 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.009085 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.008921 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.014841 4787 scope.go:117] "RemoveContainer" containerID="afc12f81dfe0e53c25ce56e14f51897917ded260c0a8e26270c443e5202b1a49" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.044456 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-r6fvq"] Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.062638 4787 scope.go:117] "RemoveContainer" 
containerID="8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544" Jan 26 19:47:05 crc kubenswrapper[4787]: E0126 19:47:05.063365 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544\": container with ID starting with 8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544 not found: ID does not exist" containerID="8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.063415 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544"} err="failed to get container status \"8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544\": rpc error: code = NotFound desc = could not find container \"8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544\": container with ID starting with 8f579a948cb545fde4709098096ca74a3076fa5a3ad5b45028519baeaf793544 not found: ID does not exist" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.063445 4787 scope.go:117] "RemoveContainer" containerID="8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725" Jan 26 19:47:05 crc kubenswrapper[4787]: E0126 19:47:05.063814 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725\": container with ID starting with 8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725 not found: ID does not exist" containerID="8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.063854 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725"} err="failed to get container status \"8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725\": rpc error: code = NotFound desc = could not find container \"8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725\": container with ID starting with 8d0a3f5d8ff8396096c1c956e21f84f518f0524fd1514df5afeed2d2dbab7725 not found: ID does not exist" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.063877 4787 scope.go:117] "RemoveContainer" containerID="afc12f81dfe0e53c25ce56e14f51897917ded260c0a8e26270c443e5202b1a49" Jan 26 19:47:05 crc kubenswrapper[4787]: E0126 19:47:05.064138 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc12f81dfe0e53c25ce56e14f51897917ded260c0a8e26270c443e5202b1a49\": container with ID starting with afc12f81dfe0e53c25ce56e14f51897917ded260c0a8e26270c443e5202b1a49 not found: ID does not exist" containerID="afc12f81dfe0e53c25ce56e14f51897917ded260c0a8e26270c443e5202b1a49" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.064160 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc12f81dfe0e53c25ce56e14f51897917ded260c0a8e26270c443e5202b1a49"} err="failed to get container status \"afc12f81dfe0e53c25ce56e14f51897917ded260c0a8e26270c443e5202b1a49\": rpc error: code = NotFound desc = could not find container \"afc12f81dfe0e53c25ce56e14f51897917ded260c0a8e26270c443e5202b1a49\": container with ID starting with afc12f81dfe0e53c25ce56e14f51897917ded260c0a8e26270c443e5202b1a49 not found: ID does not exist" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.064174 4787 scope.go:117] "RemoveContainer" containerID="b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.082176 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cmtd\" (UniqueName: \"kubernetes.io/projected/172ce100-893e-420a-bb76-beda7ab879db-kube-api-access-9cmtd\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.082283 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.082392 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-inventory\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.082438 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ceph\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.105375 4787 scope.go:117] "RemoveContainer" containerID="ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.138480 4787 scope.go:117] "RemoveContainer" 
containerID="29d2967e629d569bc1f0af825652e83f7ae5a1cd6875b506c2761cd19e5660bb" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.183050 4787 scope.go:117] "RemoveContainer" containerID="b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98" Jan 26 19:47:05 crc kubenswrapper[4787]: E0126 19:47:05.184154 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98\": container with ID starting with b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98 not found: ID does not exist" containerID="b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.184234 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ceph\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.184242 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98"} err="failed to get container status \"b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98\": rpc error: code = NotFound desc = could not find container \"b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98\": container with ID starting with b4469e1f59426e572e1cfe3f0368d9977978319b2f57f09dcadf653b6f9e3e98 not found: ID does not exist" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.184303 4787 scope.go:117] "RemoveContainer" containerID="ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.184306 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9cmtd\" (UniqueName: \"kubernetes.io/projected/172ce100-893e-420a-bb76-beda7ab879db-kube-api-access-9cmtd\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.184425 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.184561 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-inventory\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: E0126 19:47:05.185208 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0\": container with ID starting with ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0 not found: ID does not exist" containerID="ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.185241 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0"} err="failed to get container status 
\"ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0\": rpc error: code = NotFound desc = could not find container \"ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0\": container with ID starting with ca4e5e62724da1c5e509f102ef255ecd30ec472a4be37db3ea167678a3ba7ea0 not found: ID does not exist" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.185283 4787 scope.go:117] "RemoveContainer" containerID="29d2967e629d569bc1f0af825652e83f7ae5a1cd6875b506c2761cd19e5660bb" Jan 26 19:47:05 crc kubenswrapper[4787]: E0126 19:47:05.186415 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29d2967e629d569bc1f0af825652e83f7ae5a1cd6875b506c2761cd19e5660bb\": container with ID starting with 29d2967e629d569bc1f0af825652e83f7ae5a1cd6875b506c2761cd19e5660bb not found: ID does not exist" containerID="29d2967e629d569bc1f0af825652e83f7ae5a1cd6875b506c2761cd19e5660bb" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.186486 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d2967e629d569bc1f0af825652e83f7ae5a1cd6875b506c2761cd19e5660bb"} err="failed to get container status \"29d2967e629d569bc1f0af825652e83f7ae5a1cd6875b506c2761cd19e5660bb\": rpc error: code = NotFound desc = could not find container \"29d2967e629d569bc1f0af825652e83f7ae5a1cd6875b506c2761cd19e5660bb\": container with ID starting with 29d2967e629d569bc1f0af825652e83f7ae5a1cd6875b506c2761cd19e5660bb not found: ID does not exist" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.191122 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" 
Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.191740 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ceph\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.192155 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-inventory\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.203687 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cmtd\" (UniqueName: \"kubernetes.io/projected/172ce100-893e-420a-bb76-beda7ab879db-kube-api-access-9cmtd\") pod \"validate-network-openstack-openstack-cell1-r6fvq\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.318615 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.603707 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b459fb-c72c-4a65-bd84-65e43283ad69" path="/var/lib/kubelet/pods/38b459fb-c72c-4a65-bd84-65e43283ad69/volumes" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.604656 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92020097-96c4-494f-a447-df16645e2d7a" path="/var/lib/kubelet/pods/92020097-96c4-494f-a447-df16645e2d7a/volumes" Jan 26 19:47:05 crc kubenswrapper[4787]: I0126 19:47:05.860611 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-r6fvq"] Jan 26 19:47:06 crc kubenswrapper[4787]: I0126 19:47:06.848551 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" event={"ID":"172ce100-893e-420a-bb76-beda7ab879db","Type":"ContainerStarted","Data":"707f20e92e310e8d0f18fa22b8d7ee12944a4f17bee6cedf00bc37227e7cd960"} Jan 26 19:47:06 crc kubenswrapper[4787]: I0126 19:47:06.849169 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" event={"ID":"172ce100-893e-420a-bb76-beda7ab879db","Type":"ContainerStarted","Data":"55a90b7ec8987810bc70c80c6c484a83d5b201a4aa07e4b1bca9426e2b979cc2"} Jan 26 19:47:06 crc kubenswrapper[4787]: I0126 19:47:06.868737 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" podStartSLOduration=2.3779247740000002 podStartE2EDuration="2.868713316s" podCreationTimestamp="2026-01-26 19:47:04 +0000 UTC" firstStartedPulling="2026-01-26 19:47:05.877125406 +0000 UTC m=+7394.584261539" lastFinishedPulling="2026-01-26 19:47:06.367913948 +0000 UTC m=+7395.075050081" observedRunningTime="2026-01-26 19:47:06.865140198 +0000 UTC 
m=+7395.572276351" watchObservedRunningTime="2026-01-26 19:47:06.868713316 +0000 UTC m=+7395.575849449" Jan 26 19:47:10 crc kubenswrapper[4787]: I0126 19:47:10.124494 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:47:10 crc kubenswrapper[4787]: I0126 19:47:10.176452 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:47:11 crc kubenswrapper[4787]: I0126 19:47:11.528884 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pjgjs"] Jan 26 19:47:11 crc kubenswrapper[4787]: I0126 19:47:11.892040 4787 generic.go:334] "Generic (PLEG): container finished" podID="172ce100-893e-420a-bb76-beda7ab879db" containerID="707f20e92e310e8d0f18fa22b8d7ee12944a4f17bee6cedf00bc37227e7cd960" exitCode=0 Jan 26 19:47:11 crc kubenswrapper[4787]: I0126 19:47:11.892137 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" event={"ID":"172ce100-893e-420a-bb76-beda7ab879db","Type":"ContainerDied","Data":"707f20e92e310e8d0f18fa22b8d7ee12944a4f17bee6cedf00bc37227e7cd960"} Jan 26 19:47:11 crc kubenswrapper[4787]: I0126 19:47:11.892289 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pjgjs" podUID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerName="registry-server" containerID="cri-o://e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3" gracePeriod=2 Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.403912 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.548247 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-utilities\") pod \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\" (UID: \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.548352 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b27sc\" (UniqueName: \"kubernetes.io/projected/0b406b51-5f61-4549-a7cd-cd39a853c4dd-kube-api-access-b27sc\") pod \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\" (UID: \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.548598 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-catalog-content\") pod \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\" (UID: \"0b406b51-5f61-4549-a7cd-cd39a853c4dd\") " Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.550649 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-utilities" (OuterVolumeSpecName: "utilities") pod "0b406b51-5f61-4549-a7cd-cd39a853c4dd" (UID: "0b406b51-5f61-4549-a7cd-cd39a853c4dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.555854 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b406b51-5f61-4549-a7cd-cd39a853c4dd-kube-api-access-b27sc" (OuterVolumeSpecName: "kube-api-access-b27sc") pod "0b406b51-5f61-4549-a7cd-cd39a853c4dd" (UID: "0b406b51-5f61-4549-a7cd-cd39a853c4dd"). InnerVolumeSpecName "kube-api-access-b27sc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.653314 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.653665 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b27sc\" (UniqueName: \"kubernetes.io/projected/0b406b51-5f61-4549-a7cd-cd39a853c4dd-kube-api-access-b27sc\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.689616 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b406b51-5f61-4549-a7cd-cd39a853c4dd" (UID: "0b406b51-5f61-4549-a7cd-cd39a853c4dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.756006 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b406b51-5f61-4549-a7cd-cd39a853c4dd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.905669 4787 generic.go:334] "Generic (PLEG): container finished" podID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerID="e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3" exitCode=0 Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.905725 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pjgjs" Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.905780 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjgjs" event={"ID":"0b406b51-5f61-4549-a7cd-cd39a853c4dd","Type":"ContainerDied","Data":"e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3"} Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.905810 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjgjs" event={"ID":"0b406b51-5f61-4549-a7cd-cd39a853c4dd","Type":"ContainerDied","Data":"e5e7f0c5410f5d4ead77873ef6c9dd29fb8e708460d162c57d1128ca7ea7163f"} Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.905831 4787 scope.go:117] "RemoveContainer" containerID="e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3" Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.959718 4787 scope.go:117] "RemoveContainer" containerID="6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de" Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.974305 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pjgjs"] Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.983714 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pjgjs"] Jan 26 19:47:12 crc kubenswrapper[4787]: I0126 19:47:12.994457 4787 scope.go:117] "RemoveContainer" containerID="b731810bc1c75a218963383eb74b262f38c3745ade0aa152c23065affb4dd495" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.042880 4787 scope.go:117] "RemoveContainer" containerID="e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3" Jan 26 19:47:13 crc kubenswrapper[4787]: E0126 19:47:13.043563 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3\": container with ID starting with e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3 not found: ID does not exist" containerID="e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.043595 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3"} err="failed to get container status \"e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3\": rpc error: code = NotFound desc = could not find container \"e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3\": container with ID starting with e710eb0663ed6d12c5004563ff6d797580c90f53b5b462fa2837aa442bc50db3 not found: ID does not exist" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.043615 4787 scope.go:117] "RemoveContainer" containerID="6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de" Jan 26 19:47:13 crc kubenswrapper[4787]: E0126 19:47:13.044797 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de\": container with ID starting with 6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de not found: ID does not exist" containerID="6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.044821 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de"} err="failed to get container status \"6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de\": rpc error: code = NotFound desc = could not find container \"6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de\": container with ID 
starting with 6861d893e12a1029138a7b9e8ffdfd99095f6b2d40003898220862428f5d94de not found: ID does not exist" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.044838 4787 scope.go:117] "RemoveContainer" containerID="b731810bc1c75a218963383eb74b262f38c3745ade0aa152c23065affb4dd495" Jan 26 19:47:13 crc kubenswrapper[4787]: E0126 19:47:13.045163 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b731810bc1c75a218963383eb74b262f38c3745ade0aa152c23065affb4dd495\": container with ID starting with b731810bc1c75a218963383eb74b262f38c3745ade0aa152c23065affb4dd495 not found: ID does not exist" containerID="b731810bc1c75a218963383eb74b262f38c3745ade0aa152c23065affb4dd495" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.045185 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b731810bc1c75a218963383eb74b262f38c3745ade0aa152c23065affb4dd495"} err="failed to get container status \"b731810bc1c75a218963383eb74b262f38c3745ade0aa152c23065affb4dd495\": rpc error: code = NotFound desc = could not find container \"b731810bc1c75a218963383eb74b262f38c3745ade0aa152c23065affb4dd495\": container with ID starting with b731810bc1c75a218963383eb74b262f38c3745ade0aa152c23065affb4dd495 not found: ID does not exist" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.477531 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.579887 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-inventory\") pod \"172ce100-893e-420a-bb76-beda7ab879db\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.579980 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ssh-key-openstack-cell1\") pod \"172ce100-893e-420a-bb76-beda7ab879db\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.580149 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cmtd\" (UniqueName: \"kubernetes.io/projected/172ce100-893e-420a-bb76-beda7ab879db-kube-api-access-9cmtd\") pod \"172ce100-893e-420a-bb76-beda7ab879db\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.580181 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ceph\") pod \"172ce100-893e-420a-bb76-beda7ab879db\" (UID: \"172ce100-893e-420a-bb76-beda7ab879db\") " Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.585962 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ceph" (OuterVolumeSpecName: "ceph") pod "172ce100-893e-420a-bb76-beda7ab879db" (UID: "172ce100-893e-420a-bb76-beda7ab879db"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.586283 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172ce100-893e-420a-bb76-beda7ab879db-kube-api-access-9cmtd" (OuterVolumeSpecName: "kube-api-access-9cmtd") pod "172ce100-893e-420a-bb76-beda7ab879db" (UID: "172ce100-893e-420a-bb76-beda7ab879db"). InnerVolumeSpecName "kube-api-access-9cmtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.602747 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" path="/var/lib/kubelet/pods/0b406b51-5f61-4549-a7cd-cd39a853c4dd/volumes" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.616264 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-inventory" (OuterVolumeSpecName: "inventory") pod "172ce100-893e-420a-bb76-beda7ab879db" (UID: "172ce100-893e-420a-bb76-beda7ab879db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.618640 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "172ce100-893e-420a-bb76-beda7ab879db" (UID: "172ce100-893e-420a-bb76-beda7ab879db"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.682856 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cmtd\" (UniqueName: \"kubernetes.io/projected/172ce100-893e-420a-bb76-beda7ab879db-kube-api-access-9cmtd\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.682910 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.682925 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.682939 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/172ce100-893e-420a-bb76-beda7ab879db-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.919167 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" event={"ID":"172ce100-893e-420a-bb76-beda7ab879db","Type":"ContainerDied","Data":"55a90b7ec8987810bc70c80c6c484a83d5b201a4aa07e4b1bca9426e2b979cc2"} Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.919636 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a90b7ec8987810bc70c80c6c484a83d5b201a4aa07e4b1bca9426e2b979cc2" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.919242 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-r6fvq" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.995131 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-dt69h"] Jan 26 19:47:13 crc kubenswrapper[4787]: E0126 19:47:13.995570 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerName="extract-utilities" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.995586 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerName="extract-utilities" Jan 26 19:47:13 crc kubenswrapper[4787]: E0126 19:47:13.995599 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerName="extract-content" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.995605 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerName="extract-content" Jan 26 19:47:13 crc kubenswrapper[4787]: E0126 19:47:13.995618 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172ce100-893e-420a-bb76-beda7ab879db" containerName="validate-network-openstack-openstack-cell1" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.995625 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="172ce100-893e-420a-bb76-beda7ab879db" containerName="validate-network-openstack-openstack-cell1" Jan 26 19:47:13 crc kubenswrapper[4787]: E0126 19:47:13.995638 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerName="registry-server" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.995645 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerName="registry-server" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.995853 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="172ce100-893e-420a-bb76-beda7ab879db" containerName="validate-network-openstack-openstack-cell1" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.995865 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b406b51-5f61-4549-a7cd-cd39a853c4dd" containerName="registry-server" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.996579 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.998895 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.999218 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:47:13 crc kubenswrapper[4787]: I0126 19:47:13.999779 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.000419 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.021476 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-dt69h"] Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.092208 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ceph\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.092331 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-inventory\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.092391 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.093236 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrw7p\" (UniqueName: \"kubernetes.io/projected/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-kube-api-access-lrw7p\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.195126 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrw7p\" (UniqueName: \"kubernetes.io/projected/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-kube-api-access-lrw7p\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.195270 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ceph\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " 
pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.195304 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-inventory\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.195341 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.200095 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-inventory\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.200157 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.200544 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ceph\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: 
\"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.222825 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrw7p\" (UniqueName: \"kubernetes.io/projected/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-kube-api-access-lrw7p\") pod \"install-os-openstack-openstack-cell1-dt69h\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.319718 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.881197 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-dt69h"] Jan 26 19:47:14 crc kubenswrapper[4787]: I0126 19:47:14.935199 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-dt69h" event={"ID":"fcb3f748-1ccd-4119-b32b-6cab1a5ef232","Type":"ContainerStarted","Data":"93bf1d6baf987d2293ebddb9b3f4ad482990346afe79fb0047bb5c1a55c3d50c"} Jan 26 19:47:15 crc kubenswrapper[4787]: I0126 19:47:15.962637 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-dt69h" event={"ID":"fcb3f748-1ccd-4119-b32b-6cab1a5ef232","Type":"ContainerStarted","Data":"1306da2503304252a8969a7a8aaf71eb3a7bbdf8a67033e2f627750cdee65143"} Jan 26 19:47:15 crc kubenswrapper[4787]: I0126 19:47:15.984719 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-dt69h" podStartSLOduration=2.582663669 podStartE2EDuration="2.984701848s" podCreationTimestamp="2026-01-26 19:47:13 +0000 UTC" firstStartedPulling="2026-01-26 19:47:14.880976779 +0000 UTC m=+7403.588112922" lastFinishedPulling="2026-01-26 
19:47:15.283014968 +0000 UTC m=+7403.990151101" observedRunningTime="2026-01-26 19:47:15.981329674 +0000 UTC m=+7404.688465827" watchObservedRunningTime="2026-01-26 19:47:15.984701848 +0000 UTC m=+7404.691837981" Jan 26 19:47:16 crc kubenswrapper[4787]: I0126 19:47:16.808509 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:47:16 crc kubenswrapper[4787]: I0126 19:47:16.808757 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:47:46 crc kubenswrapper[4787]: I0126 19:47:46.808380 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:47:46 crc kubenswrapper[4787]: I0126 19:47:46.808965 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:47:46 crc kubenswrapper[4787]: I0126 19:47:46.809009 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 19:47:46 crc kubenswrapper[4787]: I0126 19:47:46.809767 4787 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:47:46 crc kubenswrapper[4787]: I0126 19:47:46.809820 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" gracePeriod=600 Jan 26 19:47:46 crc kubenswrapper[4787]: E0126 19:47:46.973463 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:47:47 crc kubenswrapper[4787]: I0126 19:47:47.252453 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" exitCode=0 Jan 26 19:47:47 crc kubenswrapper[4787]: I0126 19:47:47.252491 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920"} Jan 26 19:47:47 crc kubenswrapper[4787]: I0126 19:47:47.252522 4787 scope.go:117] "RemoveContainer" 
containerID="d6c7d4eb104591d8bce593f41cd34764d38005a94c22d20c3aeb66e411092dcb" Jan 26 19:47:47 crc kubenswrapper[4787]: I0126 19:47:47.253230 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:47:47 crc kubenswrapper[4787]: E0126 19:47:47.253553 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:48:01 crc kubenswrapper[4787]: I0126 19:48:01.414452 4787 generic.go:334] "Generic (PLEG): container finished" podID="fcb3f748-1ccd-4119-b32b-6cab1a5ef232" containerID="1306da2503304252a8969a7a8aaf71eb3a7bbdf8a67033e2f627750cdee65143" exitCode=0 Jan 26 19:48:01 crc kubenswrapper[4787]: I0126 19:48:01.414554 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-dt69h" event={"ID":"fcb3f748-1ccd-4119-b32b-6cab1a5ef232","Type":"ContainerDied","Data":"1306da2503304252a8969a7a8aaf71eb3a7bbdf8a67033e2f627750cdee65143"} Jan 26 19:48:01 crc kubenswrapper[4787]: I0126 19:48:01.598148 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:48:01 crc kubenswrapper[4787]: E0126 19:48:01.598708 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" 
podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:48:02 crc kubenswrapper[4787]: I0126 19:48:02.872619 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:48:02 crc kubenswrapper[4787]: I0126 19:48:02.993291 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ssh-key-openstack-cell1\") pod \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " Jan 26 19:48:02 crc kubenswrapper[4787]: I0126 19:48:02.993528 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-inventory\") pod \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " Jan 26 19:48:02 crc kubenswrapper[4787]: I0126 19:48:02.993598 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrw7p\" (UniqueName: \"kubernetes.io/projected/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-kube-api-access-lrw7p\") pod \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " Jan 26 19:48:02 crc kubenswrapper[4787]: I0126 19:48:02.993671 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ceph\") pod \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\" (UID: \"fcb3f748-1ccd-4119-b32b-6cab1a5ef232\") " Jan 26 19:48:02 crc kubenswrapper[4787]: I0126 19:48:02.999452 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ceph" (OuterVolumeSpecName: "ceph") pod "fcb3f748-1ccd-4119-b32b-6cab1a5ef232" (UID: "fcb3f748-1ccd-4119-b32b-6cab1a5ef232"). 
InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.000803 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-kube-api-access-lrw7p" (OuterVolumeSpecName: "kube-api-access-lrw7p") pod "fcb3f748-1ccd-4119-b32b-6cab1a5ef232" (UID: "fcb3f748-1ccd-4119-b32b-6cab1a5ef232"). InnerVolumeSpecName "kube-api-access-lrw7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.026747 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-inventory" (OuterVolumeSpecName: "inventory") pod "fcb3f748-1ccd-4119-b32b-6cab1a5ef232" (UID: "fcb3f748-1ccd-4119-b32b-6cab1a5ef232"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.031102 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "fcb3f748-1ccd-4119-b32b-6cab1a5ef232" (UID: "fcb3f748-1ccd-4119-b32b-6cab1a5ef232"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.096023 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.096300 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrw7p\" (UniqueName: \"kubernetes.io/projected/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-kube-api-access-lrw7p\") on node \"crc\" DevicePath \"\"" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.096376 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.096446 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fcb3f748-1ccd-4119-b32b-6cab1a5ef232-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.435423 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-dt69h" event={"ID":"fcb3f748-1ccd-4119-b32b-6cab1a5ef232","Type":"ContainerDied","Data":"93bf1d6baf987d2293ebddb9b3f4ad482990346afe79fb0047bb5c1a55c3d50c"} Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.435736 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93bf1d6baf987d2293ebddb9b3f4ad482990346afe79fb0047bb5c1a55c3d50c" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.435480 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-dt69h" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.517389 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-t6gm5"] Jan 26 19:48:03 crc kubenswrapper[4787]: E0126 19:48:03.517854 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb3f748-1ccd-4119-b32b-6cab1a5ef232" containerName="install-os-openstack-openstack-cell1" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.517877 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb3f748-1ccd-4119-b32b-6cab1a5ef232" containerName="install-os-openstack-openstack-cell1" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.518120 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcb3f748-1ccd-4119-b32b-6cab1a5ef232" containerName="install-os-openstack-openstack-cell1" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.518821 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.523609 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.523834 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.523716 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.524473 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.535174 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-t6gm5"] Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.708536 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-inventory\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.708615 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.708662 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ceph\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.708696 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-457hl\" (UniqueName: \"kubernetes.io/projected/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-kube-api-access-457hl\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.811201 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-inventory\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.811266 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.811309 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ceph\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: 
I0126 19:48:03.811345 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-457hl\" (UniqueName: \"kubernetes.io/projected/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-kube-api-access-457hl\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.814931 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-inventory\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.815135 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ceph\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.815265 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.832008 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-457hl\" (UniqueName: \"kubernetes.io/projected/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-kube-api-access-457hl\") pod \"configure-os-openstack-openstack-cell1-t6gm5\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " 
pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:03 crc kubenswrapper[4787]: I0126 19:48:03.838884 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:04 crc kubenswrapper[4787]: W0126 19:48:04.530071 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa9d3cb7_a099_4c24_aa56_3f74900d35fd.slice/crio-3d668e371dbcddf2e61fad5987f5a2716a48ee8e3d40ae002f6dc10204dcf612 WatchSource:0}: Error finding container 3d668e371dbcddf2e61fad5987f5a2716a48ee8e3d40ae002f6dc10204dcf612: Status 404 returned error can't find the container with id 3d668e371dbcddf2e61fad5987f5a2716a48ee8e3d40ae002f6dc10204dcf612 Jan 26 19:48:04 crc kubenswrapper[4787]: I0126 19:48:04.532246 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-t6gm5"] Jan 26 19:48:05 crc kubenswrapper[4787]: I0126 19:48:05.462353 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" event={"ID":"fa9d3cb7-a099-4c24-aa56-3f74900d35fd","Type":"ContainerStarted","Data":"0b9912946b7df5b28f4832c3fdefebed5ed0ef8bd34decad6f8295cc2b692d02"} Jan 26 19:48:05 crc kubenswrapper[4787]: I0126 19:48:05.462672 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" event={"ID":"fa9d3cb7-a099-4c24-aa56-3f74900d35fd","Type":"ContainerStarted","Data":"3d668e371dbcddf2e61fad5987f5a2716a48ee8e3d40ae002f6dc10204dcf612"} Jan 26 19:48:05 crc kubenswrapper[4787]: I0126 19:48:05.493032 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" podStartSLOduration=1.972134955 podStartE2EDuration="2.493011555s" podCreationTimestamp="2026-01-26 19:48:03 +0000 UTC" firstStartedPulling="2026-01-26 
19:48:04.534019632 +0000 UTC m=+7453.241155765" lastFinishedPulling="2026-01-26 19:48:05.054896232 +0000 UTC m=+7453.762032365" observedRunningTime="2026-01-26 19:48:05.487460059 +0000 UTC m=+7454.194596192" watchObservedRunningTime="2026-01-26 19:48:05.493011555 +0000 UTC m=+7454.200147688" Jan 26 19:48:16 crc kubenswrapper[4787]: I0126 19:48:16.590183 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:48:16 crc kubenswrapper[4787]: E0126 19:48:16.592654 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:48:31 crc kubenswrapper[4787]: I0126 19:48:31.589349 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:48:31 crc kubenswrapper[4787]: E0126 19:48:31.590221 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:48:46 crc kubenswrapper[4787]: I0126 19:48:46.589752 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:48:46 crc kubenswrapper[4787]: E0126 19:48:46.590535 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:48:47 crc kubenswrapper[4787]: I0126 19:48:47.895911 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa9d3cb7-a099-4c24-aa56-3f74900d35fd" containerID="0b9912946b7df5b28f4832c3fdefebed5ed0ef8bd34decad6f8295cc2b692d02" exitCode=0 Jan 26 19:48:47 crc kubenswrapper[4787]: I0126 19:48:47.895985 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" event={"ID":"fa9d3cb7-a099-4c24-aa56-3f74900d35fd","Type":"ContainerDied","Data":"0b9912946b7df5b28f4832c3fdefebed5ed0ef8bd34decad6f8295cc2b692d02"} Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.375685 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.526546 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ceph\") pod \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.527030 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-inventory\") pod \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.527231 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-457hl\" (UniqueName: \"kubernetes.io/projected/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-kube-api-access-457hl\") pod \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.527349 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ssh-key-openstack-cell1\") pod \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\" (UID: \"fa9d3cb7-a099-4c24-aa56-3f74900d35fd\") " Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.532195 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ceph" (OuterVolumeSpecName: "ceph") pod "fa9d3cb7-a099-4c24-aa56-3f74900d35fd" (UID: "fa9d3cb7-a099-4c24-aa56-3f74900d35fd"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.532654 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-kube-api-access-457hl" (OuterVolumeSpecName: "kube-api-access-457hl") pod "fa9d3cb7-a099-4c24-aa56-3f74900d35fd" (UID: "fa9d3cb7-a099-4c24-aa56-3f74900d35fd"). InnerVolumeSpecName "kube-api-access-457hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.564185 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-inventory" (OuterVolumeSpecName: "inventory") pod "fa9d3cb7-a099-4c24-aa56-3f74900d35fd" (UID: "fa9d3cb7-a099-4c24-aa56-3f74900d35fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.568722 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "fa9d3cb7-a099-4c24-aa56-3f74900d35fd" (UID: "fa9d3cb7-a099-4c24-aa56-3f74900d35fd"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.631342 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.631392 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.631406 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-457hl\" (UniqueName: \"kubernetes.io/projected/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-kube-api-access-457hl\") on node \"crc\" DevicePath \"\"" Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.631417 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fa9d3cb7-a099-4c24-aa56-3f74900d35fd-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.916903 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" event={"ID":"fa9d3cb7-a099-4c24-aa56-3f74900d35fd","Type":"ContainerDied","Data":"3d668e371dbcddf2e61fad5987f5a2716a48ee8e3d40ae002f6dc10204dcf612"} Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.916966 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d668e371dbcddf2e61fad5987f5a2716a48ee8e3d40ae002f6dc10204dcf612" Jan 26 19:48:49 crc kubenswrapper[4787]: I0126 19:48:49.917001 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-t6gm5" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.007311 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-nb2mz"] Jan 26 19:48:50 crc kubenswrapper[4787]: E0126 19:48:50.007751 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9d3cb7-a099-4c24-aa56-3f74900d35fd" containerName="configure-os-openstack-openstack-cell1" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.007771 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9d3cb7-a099-4c24-aa56-3f74900d35fd" containerName="configure-os-openstack-openstack-cell1" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.007999 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9d3cb7-a099-4c24-aa56-3f74900d35fd" containerName="configure-os-openstack-openstack-cell1" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.008734 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.011299 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.011649 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.011804 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.015618 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.022461 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-nb2mz"] Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.142170 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qszh\" (UniqueName: \"kubernetes.io/projected/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-kube-api-access-9qszh\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.142226 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.142609 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ceph\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.142674 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-inventory-0\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.244831 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.245030 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ceph\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.245064 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-inventory-0\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.245202 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qszh\" (UniqueName: 
\"kubernetes.io/projected/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-kube-api-access-9qszh\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.249999 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.250472 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-inventory-0\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.252837 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ceph\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.267103 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qszh\" (UniqueName: \"kubernetes.io/projected/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-kube-api-access-9qszh\") pod \"ssh-known-hosts-openstack-nb2mz\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.327594 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:48:50 crc kubenswrapper[4787]: W0126 19:48:50.878778 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb23c8e5_45c8_466b_8ab8_0350d707ee6b.slice/crio-d39b11196e5c9853b1cfd4fec294aaba8c5b56e96e324474f49d970b7d220792 WatchSource:0}: Error finding container d39b11196e5c9853b1cfd4fec294aaba8c5b56e96e324474f49d970b7d220792: Status 404 returned error can't find the container with id d39b11196e5c9853b1cfd4fec294aaba8c5b56e96e324474f49d970b7d220792 Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.884198 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-nb2mz"] Jan 26 19:48:50 crc kubenswrapper[4787]: I0126 19:48:50.927571 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-nb2mz" event={"ID":"bb23c8e5-45c8-466b-8ab8-0350d707ee6b","Type":"ContainerStarted","Data":"d39b11196e5c9853b1cfd4fec294aaba8c5b56e96e324474f49d970b7d220792"} Jan 26 19:48:51 crc kubenswrapper[4787]: I0126 19:48:51.941997 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-nb2mz" event={"ID":"bb23c8e5-45c8-466b-8ab8-0350d707ee6b","Type":"ContainerStarted","Data":"1315dff7e5e19ae0e70fa598517f869eaf8a6f867e4caac7e7ad86beb340ed31"} Jan 26 19:48:51 crc kubenswrapper[4787]: I0126 19:48:51.960011 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-nb2mz" podStartSLOduration=2.425445293 podStartE2EDuration="2.959990497s" podCreationTimestamp="2026-01-26 19:48:49 +0000 UTC" firstStartedPulling="2026-01-26 19:48:50.881056668 +0000 UTC m=+7499.588192801" lastFinishedPulling="2026-01-26 19:48:51.415601842 +0000 UTC m=+7500.122738005" observedRunningTime="2026-01-26 19:48:51.957059205 +0000 UTC m=+7500.664195338" watchObservedRunningTime="2026-01-26 
19:48:51.959990497 +0000 UTC m=+7500.667126650" Jan 26 19:48:57 crc kubenswrapper[4787]: I0126 19:48:57.591242 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:48:57 crc kubenswrapper[4787]: E0126 19:48:57.592619 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:49:00 crc kubenswrapper[4787]: I0126 19:49:00.020450 4787 generic.go:334] "Generic (PLEG): container finished" podID="bb23c8e5-45c8-466b-8ab8-0350d707ee6b" containerID="1315dff7e5e19ae0e70fa598517f869eaf8a6f867e4caac7e7ad86beb340ed31" exitCode=0 Jan 26 19:49:00 crc kubenswrapper[4787]: I0126 19:49:00.020657 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-nb2mz" event={"ID":"bb23c8e5-45c8-466b-8ab8-0350d707ee6b","Type":"ContainerDied","Data":"1315dff7e5e19ae0e70fa598517f869eaf8a6f867e4caac7e7ad86beb340ed31"} Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.487262 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.604674 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-inventory-0\") pod \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.604776 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ceph\") pod \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.605011 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ssh-key-openstack-cell1\") pod \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.605230 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qszh\" (UniqueName: \"kubernetes.io/projected/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-kube-api-access-9qszh\") pod \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\" (UID: \"bb23c8e5-45c8-466b-8ab8-0350d707ee6b\") " Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.612838 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ceph" (OuterVolumeSpecName: "ceph") pod "bb23c8e5-45c8-466b-8ab8-0350d707ee6b" (UID: "bb23c8e5-45c8-466b-8ab8-0350d707ee6b"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.613013 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-kube-api-access-9qszh" (OuterVolumeSpecName: "kube-api-access-9qszh") pod "bb23c8e5-45c8-466b-8ab8-0350d707ee6b" (UID: "bb23c8e5-45c8-466b-8ab8-0350d707ee6b"). InnerVolumeSpecName "kube-api-access-9qszh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.638136 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "bb23c8e5-45c8-466b-8ab8-0350d707ee6b" (UID: "bb23c8e5-45c8-466b-8ab8-0350d707ee6b"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.645125 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "bb23c8e5-45c8-466b-8ab8-0350d707ee6b" (UID: "bb23c8e5-45c8-466b-8ab8-0350d707ee6b"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.708340 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qszh\" (UniqueName: \"kubernetes.io/projected/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-kube-api-access-9qszh\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.708383 4787 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.708395 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:01 crc kubenswrapper[4787]: I0126 19:49:01.708406 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bb23c8e5-45c8-466b-8ab8-0350d707ee6b-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.042018 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-nb2mz" event={"ID":"bb23c8e5-45c8-466b-8ab8-0350d707ee6b","Type":"ContainerDied","Data":"d39b11196e5c9853b1cfd4fec294aaba8c5b56e96e324474f49d970b7d220792"} Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.042060 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d39b11196e5c9853b1cfd4fec294aaba8c5b56e96e324474f49d970b7d220792" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.042084 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-nb2mz" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.141258 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-84685"] Jan 26 19:49:02 crc kubenswrapper[4787]: E0126 19:49:02.142216 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb23c8e5-45c8-466b-8ab8-0350d707ee6b" containerName="ssh-known-hosts-openstack" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.142347 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb23c8e5-45c8-466b-8ab8-0350d707ee6b" containerName="ssh-known-hosts-openstack" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.142894 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb23c8e5-45c8-466b-8ab8-0350d707ee6b" containerName="ssh-known-hosts-openstack" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.144261 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.148184 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.148383 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.148598 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.148772 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.151922 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-84685"] Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 
19:49:02.320273 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ceph\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.320637 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.320901 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-inventory\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.321056 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j558n\" (UniqueName: \"kubernetes.io/projected/c7311cf8-1525-446e-9902-3468838d7968-kube-api-access-j558n\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.423273 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ceph\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " 
pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.423359 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.423472 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-inventory\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.423492 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j558n\" (UniqueName: \"kubernetes.io/projected/c7311cf8-1525-446e-9902-3468838d7968-kube-api-access-j558n\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.428756 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ceph\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.429434 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-inventory\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: 
\"c7311cf8-1525-446e-9902-3468838d7968\") " pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.430693 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.444542 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j558n\" (UniqueName: \"kubernetes.io/projected/c7311cf8-1525-446e-9902-3468838d7968-kube-api-access-j558n\") pod \"run-os-openstack-openstack-cell1-84685\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:02 crc kubenswrapper[4787]: I0126 19:49:02.467553 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:03 crc kubenswrapper[4787]: I0126 19:49:03.001362 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-84685"] Jan 26 19:49:03 crc kubenswrapper[4787]: I0126 19:49:03.050820 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-84685" event={"ID":"c7311cf8-1525-446e-9902-3468838d7968","Type":"ContainerStarted","Data":"d084fff8d22458785c691e41a6b51c2aeec924a861bf67ba360506c41608447f"} Jan 26 19:49:04 crc kubenswrapper[4787]: I0126 19:49:04.072656 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-84685" event={"ID":"c7311cf8-1525-446e-9902-3468838d7968","Type":"ContainerStarted","Data":"1de531bfa74f7c1fff395bd4b9ac7f9c2e49a5ecbf8bb2fe08f79aa42e8da828"} Jan 26 19:49:04 crc kubenswrapper[4787]: I0126 19:49:04.098759 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-84685" podStartSLOduration=1.670107275 podStartE2EDuration="2.098730356s" podCreationTimestamp="2026-01-26 19:49:02 +0000 UTC" firstStartedPulling="2026-01-26 19:49:02.994517346 +0000 UTC m=+7511.701653479" lastFinishedPulling="2026-01-26 19:49:03.423140427 +0000 UTC m=+7512.130276560" observedRunningTime="2026-01-26 19:49:04.096128052 +0000 UTC m=+7512.803264195" watchObservedRunningTime="2026-01-26 19:49:04.098730356 +0000 UTC m=+7512.805866529" Jan 26 19:49:11 crc kubenswrapper[4787]: I0126 19:49:11.136411 4787 generic.go:334] "Generic (PLEG): container finished" podID="c7311cf8-1525-446e-9902-3468838d7968" containerID="1de531bfa74f7c1fff395bd4b9ac7f9c2e49a5ecbf8bb2fe08f79aa42e8da828" exitCode=0 Jan 26 19:49:11 crc kubenswrapper[4787]: I0126 19:49:11.136600 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-84685" 
event={"ID":"c7311cf8-1525-446e-9902-3468838d7968","Type":"ContainerDied","Data":"1de531bfa74f7c1fff395bd4b9ac7f9c2e49a5ecbf8bb2fe08f79aa42e8da828"} Jan 26 19:49:11 crc kubenswrapper[4787]: I0126 19:49:11.597333 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:49:11 crc kubenswrapper[4787]: E0126 19:49:11.597980 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.586999 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.754544 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ceph\") pod \"c7311cf8-1525-446e-9902-3468838d7968\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.754823 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j558n\" (UniqueName: \"kubernetes.io/projected/c7311cf8-1525-446e-9902-3468838d7968-kube-api-access-j558n\") pod \"c7311cf8-1525-446e-9902-3468838d7968\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.754857 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ssh-key-openstack-cell1\") pod 
\"c7311cf8-1525-446e-9902-3468838d7968\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.754931 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-inventory\") pod \"c7311cf8-1525-446e-9902-3468838d7968\" (UID: \"c7311cf8-1525-446e-9902-3468838d7968\") " Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.760565 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7311cf8-1525-446e-9902-3468838d7968-kube-api-access-j558n" (OuterVolumeSpecName: "kube-api-access-j558n") pod "c7311cf8-1525-446e-9902-3468838d7968" (UID: "c7311cf8-1525-446e-9902-3468838d7968"). InnerVolumeSpecName "kube-api-access-j558n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.760754 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ceph" (OuterVolumeSpecName: "ceph") pod "c7311cf8-1525-446e-9902-3468838d7968" (UID: "c7311cf8-1525-446e-9902-3468838d7968"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.783779 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-inventory" (OuterVolumeSpecName: "inventory") pod "c7311cf8-1525-446e-9902-3468838d7968" (UID: "c7311cf8-1525-446e-9902-3468838d7968"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.784152 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c7311cf8-1525-446e-9902-3468838d7968" (UID: "c7311cf8-1525-446e-9902-3468838d7968"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.858569 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.858608 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j558n\" (UniqueName: \"kubernetes.io/projected/c7311cf8-1525-446e-9902-3468838d7968-kube-api-access-j558n\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.858619 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:12 crc kubenswrapper[4787]: I0126 19:49:12.858627 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7311cf8-1525-446e-9902-3468838d7968-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.164131 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-84685" event={"ID":"c7311cf8-1525-446e-9902-3468838d7968","Type":"ContainerDied","Data":"d084fff8d22458785c691e41a6b51c2aeec924a861bf67ba360506c41608447f"} Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.164494 4787 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="d084fff8d22458785c691e41a6b51c2aeec924a861bf67ba360506c41608447f" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.164185 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-84685" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.276875 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-vgzj2"] Jan 26 19:49:13 crc kubenswrapper[4787]: E0126 19:49:13.278153 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7311cf8-1525-446e-9902-3468838d7968" containerName="run-os-openstack-openstack-cell1" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.278177 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7311cf8-1525-446e-9902-3468838d7968" containerName="run-os-openstack-openstack-cell1" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.278868 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7311cf8-1525-446e-9902-3468838d7968" containerName="run-os-openstack-openstack-cell1" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.279983 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.284057 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.284348 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.284538 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.290198 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.308763 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-vgzj2"] Jan 26 19:49:13 crc kubenswrapper[4787]: E0126 19:49:13.359401 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7311cf8_1525_446e_9902_3468838d7968.slice/crio-d084fff8d22458785c691e41a6b51c2aeec924a861bf67ba360506c41608447f\": RecentStats: unable to find data in memory cache]" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.388305 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.388395 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ceph\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.388609 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4clj\" (UniqueName: \"kubernetes.io/projected/521ae3b0-3a63-442a-885c-09a689d344d9-kube-api-access-d4clj\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.388758 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-inventory\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.491205 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-inventory\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.491756 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.491905 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ceph\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.492016 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4clj\" (UniqueName: \"kubernetes.io/projected/521ae3b0-3a63-442a-885c-09a689d344d9-kube-api-access-d4clj\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.497857 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.497873 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ceph\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.498457 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-inventory\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 
19:49:13.515301 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4clj\" (UniqueName: \"kubernetes.io/projected/521ae3b0-3a63-442a-885c-09a689d344d9-kube-api-access-d4clj\") pod \"reboot-os-openstack-openstack-cell1-vgzj2\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:13 crc kubenswrapper[4787]: I0126 19:49:13.610722 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:14 crc kubenswrapper[4787]: I0126 19:49:14.188301 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-vgzj2"] Jan 26 19:49:15 crc kubenswrapper[4787]: I0126 19:49:15.188500 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" event={"ID":"521ae3b0-3a63-442a-885c-09a689d344d9","Type":"ContainerStarted","Data":"327e77d48077e00c85ee5eb437733990e5dbd31300223ea217953bc44d02f347"} Jan 26 19:49:15 crc kubenswrapper[4787]: I0126 19:49:15.188845 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" event={"ID":"521ae3b0-3a63-442a-885c-09a689d344d9","Type":"ContainerStarted","Data":"fa05373c440ccf5f235e9376fe5ff764721a1b19378aa9650187d8dc1586d551"} Jan 26 19:49:15 crc kubenswrapper[4787]: I0126 19:49:15.227752 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" podStartSLOduration=1.8165503379999999 podStartE2EDuration="2.22772648s" podCreationTimestamp="2026-01-26 19:49:13 +0000 UTC" firstStartedPulling="2026-01-26 19:49:14.191839004 +0000 UTC m=+7522.898975137" lastFinishedPulling="2026-01-26 19:49:14.603015146 +0000 UTC m=+7523.310151279" observedRunningTime="2026-01-26 19:49:15.214995757 +0000 UTC m=+7523.922131900" 
watchObservedRunningTime="2026-01-26 19:49:15.22772648 +0000 UTC m=+7523.934862613" Jan 26 19:49:24 crc kubenswrapper[4787]: I0126 19:49:24.589372 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:49:24 crc kubenswrapper[4787]: E0126 19:49:24.590156 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:49:30 crc kubenswrapper[4787]: I0126 19:49:30.339233 4787 generic.go:334] "Generic (PLEG): container finished" podID="521ae3b0-3a63-442a-885c-09a689d344d9" containerID="327e77d48077e00c85ee5eb437733990e5dbd31300223ea217953bc44d02f347" exitCode=0 Jan 26 19:49:30 crc kubenswrapper[4787]: I0126 19:49:30.339327 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" event={"ID":"521ae3b0-3a63-442a-885c-09a689d344d9","Type":"ContainerDied","Data":"327e77d48077e00c85ee5eb437733990e5dbd31300223ea217953bc44d02f347"} Jan 26 19:49:31 crc kubenswrapper[4787]: I0126 19:49:31.834923 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:31 crc kubenswrapper[4787]: I0126 19:49:31.910651 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-inventory\") pod \"521ae3b0-3a63-442a-885c-09a689d344d9\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " Jan 26 19:49:31 crc kubenswrapper[4787]: I0126 19:49:31.910754 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ceph\") pod \"521ae3b0-3a63-442a-885c-09a689d344d9\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " Jan 26 19:49:31 crc kubenswrapper[4787]: I0126 19:49:31.910805 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ssh-key-openstack-cell1\") pod \"521ae3b0-3a63-442a-885c-09a689d344d9\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " Jan 26 19:49:31 crc kubenswrapper[4787]: I0126 19:49:31.911023 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4clj\" (UniqueName: \"kubernetes.io/projected/521ae3b0-3a63-442a-885c-09a689d344d9-kube-api-access-d4clj\") pod \"521ae3b0-3a63-442a-885c-09a689d344d9\" (UID: \"521ae3b0-3a63-442a-885c-09a689d344d9\") " Jan 26 19:49:31 crc kubenswrapper[4787]: I0126 19:49:31.916560 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ceph" (OuterVolumeSpecName: "ceph") pod "521ae3b0-3a63-442a-885c-09a689d344d9" (UID: "521ae3b0-3a63-442a-885c-09a689d344d9"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:31 crc kubenswrapper[4787]: I0126 19:49:31.919284 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521ae3b0-3a63-442a-885c-09a689d344d9-kube-api-access-d4clj" (OuterVolumeSpecName: "kube-api-access-d4clj") pod "521ae3b0-3a63-442a-885c-09a689d344d9" (UID: "521ae3b0-3a63-442a-885c-09a689d344d9"). InnerVolumeSpecName "kube-api-access-d4clj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:49:31 crc kubenswrapper[4787]: I0126 19:49:31.945247 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-inventory" (OuterVolumeSpecName: "inventory") pod "521ae3b0-3a63-442a-885c-09a689d344d9" (UID: "521ae3b0-3a63-442a-885c-09a689d344d9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:31 crc kubenswrapper[4787]: I0126 19:49:31.976798 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "521ae3b0-3a63-442a-885c-09a689d344d9" (UID: "521ae3b0-3a63-442a-885c-09a689d344d9"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.013501 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4clj\" (UniqueName: \"kubernetes.io/projected/521ae3b0-3a63-442a-885c-09a689d344d9-kube-api-access-d4clj\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.013550 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.013566 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.013577 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/521ae3b0-3a63-442a-885c-09a689d344d9-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.356913 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" event={"ID":"521ae3b0-3a63-442a-885c-09a689d344d9","Type":"ContainerDied","Data":"fa05373c440ccf5f235e9376fe5ff764721a1b19378aa9650187d8dc1586d551"} Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.356985 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa05373c440ccf5f235e9376fe5ff764721a1b19378aa9650187d8dc1586d551" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.357007 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-vgzj2" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.458324 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-h7vmn"] Jan 26 19:49:32 crc kubenswrapper[4787]: E0126 19:49:32.459300 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521ae3b0-3a63-442a-885c-09a689d344d9" containerName="reboot-os-openstack-openstack-cell1" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.459330 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="521ae3b0-3a63-442a-885c-09a689d344d9" containerName="reboot-os-openstack-openstack-cell1" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.459931 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="521ae3b0-3a63-442a-885c-09a689d344d9" containerName="reboot-os-openstack-openstack-cell1" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.465982 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.468616 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.471296 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.471747 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.474150 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.477862 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-h7vmn"] Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.524925 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.524992 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmpjv\" (UniqueName: \"kubernetes.io/projected/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-kube-api-access-bmpjv\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.525025 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ceph\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.525053 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-inventory\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.525155 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.525312 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.525343 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-nova-combined-ca-bundle\") pod 
\"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.525396 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.525441 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.525494 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.525520 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " 
pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.525633 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.627881 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.627974 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmpjv\" (UniqueName: \"kubernetes.io/projected/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-kube-api-access-bmpjv\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.628016 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ceph\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.628045 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-inventory\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.628152 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.628322 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.628363 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.628447 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " 
pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.628510 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.628571 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.628598 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.628623 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.634613 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.634681 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.635381 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.635531 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.636429 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " 
pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.639824 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ceph\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.640568 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-inventory\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.641336 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.643292 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.643445 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.646108 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.648089 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmpjv\" (UniqueName: \"kubernetes.io/projected/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-kube-api-access-bmpjv\") pod \"install-certs-openstack-openstack-cell1-h7vmn\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:32 crc kubenswrapper[4787]: I0126 19:49:32.786862 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:33 crc kubenswrapper[4787]: I0126 19:49:33.334575 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-h7vmn"] Jan 26 19:49:33 crc kubenswrapper[4787]: I0126 19:49:33.367062 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" event={"ID":"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c","Type":"ContainerStarted","Data":"224f5f01cebb7f8da4cebca627def917549b4b18a3e323fb8f506121564d0f9e"} Jan 26 19:49:34 crc kubenswrapper[4787]: I0126 19:49:34.378148 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" event={"ID":"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c","Type":"ContainerStarted","Data":"eada114996fe79236b08c449f9873cd1805abf740fbdaf5e37c8e7ce603d1b2f"} Jan 26 19:49:34 crc kubenswrapper[4787]: I0126 19:49:34.399421 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" podStartSLOduration=1.900253983 podStartE2EDuration="2.3994024s" podCreationTimestamp="2026-01-26 19:49:32 +0000 UTC" firstStartedPulling="2026-01-26 19:49:33.339966788 +0000 UTC m=+7542.047102921" lastFinishedPulling="2026-01-26 19:49:33.839115205 +0000 UTC m=+7542.546251338" observedRunningTime="2026-01-26 19:49:34.395783082 +0000 UTC m=+7543.102919225" watchObservedRunningTime="2026-01-26 19:49:34.3994024 +0000 UTC m=+7543.106538533" Jan 26 19:49:37 crc kubenswrapper[4787]: I0126 19:49:37.589846 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:49:37 crc kubenswrapper[4787]: E0126 19:49:37.590634 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:49:49 crc kubenswrapper[4787]: I0126 19:49:49.589911 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:49:49 crc kubenswrapper[4787]: E0126 19:49:49.590656 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:49:53 crc kubenswrapper[4787]: I0126 19:49:53.555349 4787 generic.go:334] "Generic (PLEG): container finished" podID="bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" containerID="eada114996fe79236b08c449f9873cd1805abf740fbdaf5e37c8e7ce603d1b2f" exitCode=0 Jan 26 19:49:53 crc kubenswrapper[4787]: I0126 19:49:53.555437 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" event={"ID":"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c","Type":"ContainerDied","Data":"eada114996fe79236b08c449f9873cd1805abf740fbdaf5e37c8e7ce603d1b2f"} Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.089074 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.241971 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-metadata-combined-ca-bundle\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.242119 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmpjv\" (UniqueName: \"kubernetes.io/projected/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-kube-api-access-bmpjv\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.242167 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-bootstrap-combined-ca-bundle\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.242273 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-dhcp-combined-ca-bundle\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.242323 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-nova-combined-ca-bundle\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 
crc kubenswrapper[4787]: I0126 19:49:55.242398 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-inventory\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.242447 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-sriov-combined-ca-bundle\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.242511 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ceph\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.242554 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-telemetry-combined-ca-bundle\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.242595 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ovn-combined-ca-bundle\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.242654 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ssh-key-openstack-cell1\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.242723 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-libvirt-combined-ca-bundle\") pod \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\" (UID: \"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c\") " Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.249529 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.251707 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.252179 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.256156 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.260734 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.260865 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.261062 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.261896 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-kube-api-access-bmpjv" (OuterVolumeSpecName: "kube-api-access-bmpjv") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "kube-api-access-bmpjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.265161 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ceph" (OuterVolumeSpecName: "ceph") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.268731 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.283889 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-inventory" (OuterVolumeSpecName: "inventory") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.310704 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" (UID: "bd66a7e4-b67a-4d04-ac6d-b0640cdc592c"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345426 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmpjv\" (UniqueName: \"kubernetes.io/projected/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-kube-api-access-bmpjv\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345479 4787 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345490 4787 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345502 4787 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345516 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345527 4787 
reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345541 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345554 4787 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345566 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345576 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345587 4787 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.345597 4787 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd66a7e4-b67a-4d04-ac6d-b0640cdc592c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.574703 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" event={"ID":"bd66a7e4-b67a-4d04-ac6d-b0640cdc592c","Type":"ContainerDied","Data":"224f5f01cebb7f8da4cebca627def917549b4b18a3e323fb8f506121564d0f9e"} Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.574742 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-h7vmn" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.574766 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="224f5f01cebb7f8da4cebca627def917549b4b18a3e323fb8f506121564d0f9e" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.718508 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-tvgbs"] Jan 26 19:49:55 crc kubenswrapper[4787]: E0126 19:49:55.719271 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" containerName="install-certs-openstack-openstack-cell1" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.719295 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" containerName="install-certs-openstack-openstack-cell1" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.719538 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd66a7e4-b67a-4d04-ac6d-b0640cdc592c" containerName="install-certs-openstack-openstack-cell1" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.720567 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.724392 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.724873 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.725056 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.725230 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.738179 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-tvgbs"] Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.887123 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ceph\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.887209 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.887282 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-inventory\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.887490 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4hmk\" (UniqueName: \"kubernetes.io/projected/cd027093-fbd5-4f2a-897d-cdc67a88f7be-kube-api-access-l4hmk\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.989134 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.989443 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-inventory\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.989492 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4hmk\" (UniqueName: \"kubernetes.io/projected/cd027093-fbd5-4f2a-897d-cdc67a88f7be-kube-api-access-l4hmk\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 
19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.989626 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ceph\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:55 crc kubenswrapper[4787]: I0126 19:49:55.994414 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ceph\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:56 crc kubenswrapper[4787]: I0126 19:49:56.003720 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:56 crc kubenswrapper[4787]: I0126 19:49:56.005253 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-inventory\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:56 crc kubenswrapper[4787]: I0126 19:49:56.009627 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4hmk\" (UniqueName: \"kubernetes.io/projected/cd027093-fbd5-4f2a-897d-cdc67a88f7be-kube-api-access-l4hmk\") pod \"ceph-client-openstack-openstack-cell1-tvgbs\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " 
pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:56 crc kubenswrapper[4787]: I0126 19:49:56.037686 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:49:56 crc kubenswrapper[4787]: I0126 19:49:56.624521 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-tvgbs"] Jan 26 19:49:56 crc kubenswrapper[4787]: W0126 19:49:56.627461 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd027093_fbd5_4f2a_897d_cdc67a88f7be.slice/crio-e93b25b6e04189976c1b59e3fb468d99ce731a263163b8bc0ac5e29e5809e1be WatchSource:0}: Error finding container e93b25b6e04189976c1b59e3fb468d99ce731a263163b8bc0ac5e29e5809e1be: Status 404 returned error can't find the container with id e93b25b6e04189976c1b59e3fb468d99ce731a263163b8bc0ac5e29e5809e1be Jan 26 19:49:57 crc kubenswrapper[4787]: I0126 19:49:57.603006 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" event={"ID":"cd027093-fbd5-4f2a-897d-cdc67a88f7be","Type":"ContainerStarted","Data":"9158c060a61ec53f8e6e2664a7c0464be97451d21ee352f0754813f5b26595eb"} Jan 26 19:49:57 crc kubenswrapper[4787]: I0126 19:49:57.603326 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" event={"ID":"cd027093-fbd5-4f2a-897d-cdc67a88f7be","Type":"ContainerStarted","Data":"e93b25b6e04189976c1b59e3fb468d99ce731a263163b8bc0ac5e29e5809e1be"} Jan 26 19:49:57 crc kubenswrapper[4787]: I0126 19:49:57.618468 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" podStartSLOduration=1.9556586839999999 podStartE2EDuration="2.61844542s" podCreationTimestamp="2026-01-26 19:49:55 +0000 UTC" firstStartedPulling="2026-01-26 
19:49:56.630433656 +0000 UTC m=+7565.337569789" lastFinishedPulling="2026-01-26 19:49:57.293220392 +0000 UTC m=+7566.000356525" observedRunningTime="2026-01-26 19:49:57.6090915 +0000 UTC m=+7566.316227633" watchObservedRunningTime="2026-01-26 19:49:57.61844542 +0000 UTC m=+7566.325581553" Jan 26 19:50:01 crc kubenswrapper[4787]: I0126 19:50:01.597043 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:50:01 crc kubenswrapper[4787]: E0126 19:50:01.597766 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:50:02 crc kubenswrapper[4787]: I0126 19:50:02.648942 4787 generic.go:334] "Generic (PLEG): container finished" podID="cd027093-fbd5-4f2a-897d-cdc67a88f7be" containerID="9158c060a61ec53f8e6e2664a7c0464be97451d21ee352f0754813f5b26595eb" exitCode=0 Jan 26 19:50:02 crc kubenswrapper[4787]: I0126 19:50:02.648991 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" event={"ID":"cd027093-fbd5-4f2a-897d-cdc67a88f7be","Type":"ContainerDied","Data":"9158c060a61ec53f8e6e2664a7c0464be97451d21ee352f0754813f5b26595eb"} Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.186330 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.367274 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ssh-key-openstack-cell1\") pod \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.367443 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-inventory\") pod \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.367546 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ceph\") pod \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.367567 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4hmk\" (UniqueName: \"kubernetes.io/projected/cd027093-fbd5-4f2a-897d-cdc67a88f7be-kube-api-access-l4hmk\") pod \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\" (UID: \"cd027093-fbd5-4f2a-897d-cdc67a88f7be\") " Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.373810 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ceph" (OuterVolumeSpecName: "ceph") pod "cd027093-fbd5-4f2a-897d-cdc67a88f7be" (UID: "cd027093-fbd5-4f2a-897d-cdc67a88f7be"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.385286 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd027093-fbd5-4f2a-897d-cdc67a88f7be-kube-api-access-l4hmk" (OuterVolumeSpecName: "kube-api-access-l4hmk") pod "cd027093-fbd5-4f2a-897d-cdc67a88f7be" (UID: "cd027093-fbd5-4f2a-897d-cdc67a88f7be"). InnerVolumeSpecName "kube-api-access-l4hmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.410273 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "cd027093-fbd5-4f2a-897d-cdc67a88f7be" (UID: "cd027093-fbd5-4f2a-897d-cdc67a88f7be"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.416083 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-inventory" (OuterVolumeSpecName: "inventory") pod "cd027093-fbd5-4f2a-897d-cdc67a88f7be" (UID: "cd027093-fbd5-4f2a-897d-cdc67a88f7be"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.470581 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.470623 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.470633 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd027093-fbd5-4f2a-897d-cdc67a88f7be-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.470642 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4hmk\" (UniqueName: \"kubernetes.io/projected/cd027093-fbd5-4f2a-897d-cdc67a88f7be-kube-api-access-l4hmk\") on node \"crc\" DevicePath \"\"" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.669376 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" event={"ID":"cd027093-fbd5-4f2a-897d-cdc67a88f7be","Type":"ContainerDied","Data":"e93b25b6e04189976c1b59e3fb468d99ce731a263163b8bc0ac5e29e5809e1be"} Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.669744 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e93b25b6e04189976c1b59e3fb468d99ce731a263163b8bc0ac5e29e5809e1be" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.669467 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-tvgbs" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.760285 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-2sq4b"] Jan 26 19:50:04 crc kubenswrapper[4787]: E0126 19:50:04.761072 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd027093-fbd5-4f2a-897d-cdc67a88f7be" containerName="ceph-client-openstack-openstack-cell1" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.761090 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd027093-fbd5-4f2a-897d-cdc67a88f7be" containerName="ceph-client-openstack-openstack-cell1" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.761569 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd027093-fbd5-4f2a-897d-cdc67a88f7be" containerName="ceph-client-openstack-openstack-cell1" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.762868 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.765838 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.766277 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.766484 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.766653 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.767860 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.771259 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-2sq4b"] Jan 26 19:50:04 crc kubenswrapper[4787]: E0126 19:50:04.807926 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd027093_fbd5_4f2a_897d_cdc67a88f7be.slice\": RecentStats: unable to find data in memory cache]" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.878007 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ceph\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.878142 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-inventory\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.878213 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/20232ae2-355a-48d9-87f8-8132caa1fff6-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.878242 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.878310 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7fg\" (UniqueName: \"kubernetes.io/projected/20232ae2-355a-48d9-87f8-8132caa1fff6-kube-api-access-4d7fg\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.878340 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " 
pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.979970 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ceph\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.980026 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-inventory\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.980082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/20232ae2-355a-48d9-87f8-8132caa1fff6-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.980106 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.980136 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d7fg\" (UniqueName: \"kubernetes.io/projected/20232ae2-355a-48d9-87f8-8132caa1fff6-kube-api-access-4d7fg\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: 
\"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.980218 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.981752 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/20232ae2-355a-48d9-87f8-8132caa1fff6-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.986685 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.988601 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ceph\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.989414 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-inventory\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: 
\"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:04 crc kubenswrapper[4787]: I0126 19:50:04.996532 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:05 crc kubenswrapper[4787]: I0126 19:50:05.005520 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d7fg\" (UniqueName: \"kubernetes.io/projected/20232ae2-355a-48d9-87f8-8132caa1fff6-kube-api-access-4d7fg\") pod \"ovn-openstack-openstack-cell1-2sq4b\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:05 crc kubenswrapper[4787]: I0126 19:50:05.086547 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:50:05 crc kubenswrapper[4787]: I0126 19:50:05.643467 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-2sq4b"] Jan 26 19:50:05 crc kubenswrapper[4787]: I0126 19:50:05.681622 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-2sq4b" event={"ID":"20232ae2-355a-48d9-87f8-8132caa1fff6","Type":"ContainerStarted","Data":"88e1671d6b2e92bb0c76dffc86ab27cbe663d72d5561dc217897d5819b712468"} Jan 26 19:50:06 crc kubenswrapper[4787]: I0126 19:50:06.690415 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-2sq4b" event={"ID":"20232ae2-355a-48d9-87f8-8132caa1fff6","Type":"ContainerStarted","Data":"8c8d85f7349696ffeb3447bf19c0e47dbf515805a76dd72ba43d627aad12a40b"} Jan 26 19:50:06 crc kubenswrapper[4787]: I0126 19:50:06.713596 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-2sq4b" podStartSLOduration=2.310859095 podStartE2EDuration="2.713581319s" podCreationTimestamp="2026-01-26 19:50:04 +0000 UTC" firstStartedPulling="2026-01-26 19:50:05.652774534 +0000 UTC m=+7574.359910667" lastFinishedPulling="2026-01-26 19:50:06.055496758 +0000 UTC m=+7574.762632891" observedRunningTime="2026-01-26 19:50:06.708802492 +0000 UTC m=+7575.415938625" watchObservedRunningTime="2026-01-26 19:50:06.713581319 +0000 UTC m=+7575.420717452" Jan 26 19:50:13 crc kubenswrapper[4787]: I0126 19:50:13.589786 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:50:13 crc kubenswrapper[4787]: E0126 19:50:13.590602 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:50:25 crc kubenswrapper[4787]: I0126 19:50:25.590460 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:50:25 crc kubenswrapper[4787]: E0126 19:50:25.591281 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:50:38 crc kubenswrapper[4787]: I0126 19:50:38.590066 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:50:38 crc kubenswrapper[4787]: E0126 19:50:38.591094 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:50:53 crc kubenswrapper[4787]: I0126 19:50:53.589212 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:50:53 crc kubenswrapper[4787]: E0126 19:50:53.590093 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:51:04 crc kubenswrapper[4787]: I0126 19:51:04.589781 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:51:04 crc kubenswrapper[4787]: E0126 19:51:04.590664 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:51:07 crc kubenswrapper[4787]: I0126 19:51:07.265549 4787 generic.go:334] "Generic (PLEG): container finished" podID="20232ae2-355a-48d9-87f8-8132caa1fff6" containerID="8c8d85f7349696ffeb3447bf19c0e47dbf515805a76dd72ba43d627aad12a40b" exitCode=0 Jan 26 19:51:07 crc kubenswrapper[4787]: I0126 19:51:07.265640 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-2sq4b" event={"ID":"20232ae2-355a-48d9-87f8-8132caa1fff6","Type":"ContainerDied","Data":"8c8d85f7349696ffeb3447bf19c0e47dbf515805a76dd72ba43d627aad12a40b"} Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.728493 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.835081 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ovn-combined-ca-bundle\") pod \"20232ae2-355a-48d9-87f8-8132caa1fff6\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.835224 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/20232ae2-355a-48d9-87f8-8132caa1fff6-ovncontroller-config-0\") pod \"20232ae2-355a-48d9-87f8-8132caa1fff6\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.835386 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d7fg\" (UniqueName: \"kubernetes.io/projected/20232ae2-355a-48d9-87f8-8132caa1fff6-kube-api-access-4d7fg\") pod \"20232ae2-355a-48d9-87f8-8132caa1fff6\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.835494 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-inventory\") pod \"20232ae2-355a-48d9-87f8-8132caa1fff6\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.835540 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ceph\") pod \"20232ae2-355a-48d9-87f8-8132caa1fff6\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.835598 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ssh-key-openstack-cell1\") pod \"20232ae2-355a-48d9-87f8-8132caa1fff6\" (UID: \"20232ae2-355a-48d9-87f8-8132caa1fff6\") " Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.840665 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20232ae2-355a-48d9-87f8-8132caa1fff6-kube-api-access-4d7fg" (OuterVolumeSpecName: "kube-api-access-4d7fg") pod "20232ae2-355a-48d9-87f8-8132caa1fff6" (UID: "20232ae2-355a-48d9-87f8-8132caa1fff6"). InnerVolumeSpecName "kube-api-access-4d7fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.841333 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ceph" (OuterVolumeSpecName: "ceph") pod "20232ae2-355a-48d9-87f8-8132caa1fff6" (UID: "20232ae2-355a-48d9-87f8-8132caa1fff6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.857150 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "20232ae2-355a-48d9-87f8-8132caa1fff6" (UID: "20232ae2-355a-48d9-87f8-8132caa1fff6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.864824 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20232ae2-355a-48d9-87f8-8132caa1fff6-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "20232ae2-355a-48d9-87f8-8132caa1fff6" (UID: "20232ae2-355a-48d9-87f8-8132caa1fff6"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.866673 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-inventory" (OuterVolumeSpecName: "inventory") pod "20232ae2-355a-48d9-87f8-8132caa1fff6" (UID: "20232ae2-355a-48d9-87f8-8132caa1fff6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.872404 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "20232ae2-355a-48d9-87f8-8132caa1fff6" (UID: "20232ae2-355a-48d9-87f8-8132caa1fff6"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.971318 4787 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/20232ae2-355a-48d9-87f8-8132caa1fff6-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.971358 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d7fg\" (UniqueName: \"kubernetes.io/projected/20232ae2-355a-48d9-87f8-8132caa1fff6-kube-api-access-4d7fg\") on node \"crc\" DevicePath \"\"" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.971370 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.971380 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ceph\") on node \"crc\" DevicePath 
\"\"" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.971396 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:51:08 crc kubenswrapper[4787]: I0126 19:51:08.971407 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20232ae2-355a-48d9-87f8-8132caa1fff6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.305263 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-2sq4b" event={"ID":"20232ae2-355a-48d9-87f8-8132caa1fff6","Type":"ContainerDied","Data":"88e1671d6b2e92bb0c76dffc86ab27cbe663d72d5561dc217897d5819b712468"} Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.305320 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88e1671d6b2e92bb0c76dffc86ab27cbe663d72d5561dc217897d5819b712468" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.305483 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-2sq4b" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.389590 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-chcs8"] Jan 26 19:51:09 crc kubenswrapper[4787]: E0126 19:51:09.390054 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20232ae2-355a-48d9-87f8-8132caa1fff6" containerName="ovn-openstack-openstack-cell1" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.390073 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="20232ae2-355a-48d9-87f8-8132caa1fff6" containerName="ovn-openstack-openstack-cell1" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.390325 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="20232ae2-355a-48d9-87f8-8132caa1fff6" containerName="ovn-openstack-openstack-cell1" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.391806 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.395080 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.398505 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.398675 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.398774 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.398867 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.399094 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.425968 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-chcs8"] Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.482044 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.482112 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceph\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.482139 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9bz8\" (UniqueName: \"kubernetes.io/projected/6cbdba7d-a110-4446-a5c1-071da22f49fd-kube-api-access-d9bz8\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.482181 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.482222 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.482320 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: 
\"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.482365 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.583597 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.583695 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.583736 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9bz8\" (UniqueName: \"kubernetes.io/projected/6cbdba7d-a110-4446-a5c1-071da22f49fd-kube-api-access-d9bz8\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.583772 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.583831 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.583972 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.584027 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.587774 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.587834 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.588899 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.588999 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.593707 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " 
pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.594844 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.607099 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9bz8\" (UniqueName: \"kubernetes.io/projected/6cbdba7d-a110-4446-a5c1-071da22f49fd-kube-api-access-d9bz8\") pod \"neutron-metadata-openstack-openstack-cell1-chcs8\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:09 crc kubenswrapper[4787]: I0126 19:51:09.718020 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:51:10 crc kubenswrapper[4787]: I0126 19:51:10.303490 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-chcs8"] Jan 26 19:51:10 crc kubenswrapper[4787]: W0126 19:51:10.316604 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cbdba7d_a110_4446_a5c1_071da22f49fd.slice/crio-93cfdec9414957ff99bbef76aae1118ecb978bc14bd4ca1763d9a19e5d7abc34 WatchSource:0}: Error finding container 93cfdec9414957ff99bbef76aae1118ecb978bc14bd4ca1763d9a19e5d7abc34: Status 404 returned error can't find the container with id 93cfdec9414957ff99bbef76aae1118ecb978bc14bd4ca1763d9a19e5d7abc34 Jan 26 19:51:11 crc kubenswrapper[4787]: I0126 19:51:11.345424 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" event={"ID":"6cbdba7d-a110-4446-a5c1-071da22f49fd","Type":"ContainerStarted","Data":"a8041fd562758bab90ed2a16535248843db68df4b70fa54030290b2c85a16f9f"} Jan 26 19:51:11 crc kubenswrapper[4787]: I0126 19:51:11.350310 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" event={"ID":"6cbdba7d-a110-4446-a5c1-071da22f49fd","Type":"ContainerStarted","Data":"93cfdec9414957ff99bbef76aae1118ecb978bc14bd4ca1763d9a19e5d7abc34"} Jan 26 19:51:11 crc kubenswrapper[4787]: I0126 19:51:11.376287 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" podStartSLOduration=1.753094264 podStartE2EDuration="2.376265219s" podCreationTimestamp="2026-01-26 19:51:09 +0000 UTC" firstStartedPulling="2026-01-26 19:51:10.3222293 +0000 UTC m=+7639.029365443" lastFinishedPulling="2026-01-26 19:51:10.945400265 +0000 UTC m=+7639.652536398" observedRunningTime="2026-01-26 
19:51:11.370480988 +0000 UTC m=+7640.077617111" watchObservedRunningTime="2026-01-26 19:51:11.376265219 +0000 UTC m=+7640.083401352" Jan 26 19:51:19 crc kubenswrapper[4787]: I0126 19:51:19.589699 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:51:19 crc kubenswrapper[4787]: E0126 19:51:19.590533 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:51:32 crc kubenswrapper[4787]: I0126 19:51:32.589279 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:51:32 crc kubenswrapper[4787]: E0126 19:51:32.590085 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:51:45 crc kubenswrapper[4787]: I0126 19:51:45.589994 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:51:45 crc kubenswrapper[4787]: E0126 19:51:45.591513 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:51:56 crc kubenswrapper[4787]: I0126 19:51:56.589111 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:51:56 crc kubenswrapper[4787]: E0126 19:51:56.589991 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:52:00 crc kubenswrapper[4787]: I0126 19:52:00.810388 4787 generic.go:334] "Generic (PLEG): container finished" podID="6cbdba7d-a110-4446-a5c1-071da22f49fd" containerID="a8041fd562758bab90ed2a16535248843db68df4b70fa54030290b2c85a16f9f" exitCode=0 Jan 26 19:52:00 crc kubenswrapper[4787]: I0126 19:52:00.810533 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" event={"ID":"6cbdba7d-a110-4446-a5c1-071da22f49fd","Type":"ContainerDied","Data":"a8041fd562758bab90ed2a16535248843db68df4b70fa54030290b2c85a16f9f"} Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.338061 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.460621 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-nova-metadata-neutron-config-0\") pod \"6cbdba7d-a110-4446-a5c1-071da22f49fd\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.460880 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9bz8\" (UniqueName: \"kubernetes.io/projected/6cbdba7d-a110-4446-a5c1-071da22f49fd-kube-api-access-d9bz8\") pod \"6cbdba7d-a110-4446-a5c1-071da22f49fd\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.460909 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ceph\") pod \"6cbdba7d-a110-4446-a5c1-071da22f49fd\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.461094 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-inventory\") pod \"6cbdba7d-a110-4446-a5c1-071da22f49fd\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.461122 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ssh-key-openstack-cell1\") pod \"6cbdba7d-a110-4446-a5c1-071da22f49fd\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.461148 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"6cbdba7d-a110-4446-a5c1-071da22f49fd\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.461271 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-metadata-combined-ca-bundle\") pod \"6cbdba7d-a110-4446-a5c1-071da22f49fd\" (UID: \"6cbdba7d-a110-4446-a5c1-071da22f49fd\") " Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.466593 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6cbdba7d-a110-4446-a5c1-071da22f49fd" (UID: "6cbdba7d-a110-4446-a5c1-071da22f49fd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.467774 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbdba7d-a110-4446-a5c1-071da22f49fd-kube-api-access-d9bz8" (OuterVolumeSpecName: "kube-api-access-d9bz8") pod "6cbdba7d-a110-4446-a5c1-071da22f49fd" (UID: "6cbdba7d-a110-4446-a5c1-071da22f49fd"). InnerVolumeSpecName "kube-api-access-d9bz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.467787 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ceph" (OuterVolumeSpecName: "ceph") pod "6cbdba7d-a110-4446-a5c1-071da22f49fd" (UID: "6cbdba7d-a110-4446-a5c1-071da22f49fd"). 
InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.489563 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "6cbdba7d-a110-4446-a5c1-071da22f49fd" (UID: "6cbdba7d-a110-4446-a5c1-071da22f49fd"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.490755 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-inventory" (OuterVolumeSpecName: "inventory") pod "6cbdba7d-a110-4446-a5c1-071da22f49fd" (UID: "6cbdba7d-a110-4446-a5c1-071da22f49fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.494294 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "6cbdba7d-a110-4446-a5c1-071da22f49fd" (UID: "6cbdba7d-a110-4446-a5c1-071da22f49fd"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.494651 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "6cbdba7d-a110-4446-a5c1-071da22f49fd" (UID: "6cbdba7d-a110-4446-a5c1-071da22f49fd"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.563897 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.563947 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.563979 4787 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.563996 4787 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.564010 4787 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.564021 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9bz8\" (UniqueName: \"kubernetes.io/projected/6cbdba7d-a110-4446-a5c1-071da22f49fd-kube-api-access-d9bz8\") on node \"crc\" DevicePath \"\"" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.564031 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/6cbdba7d-a110-4446-a5c1-071da22f49fd-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.831744 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" event={"ID":"6cbdba7d-a110-4446-a5c1-071da22f49fd","Type":"ContainerDied","Data":"93cfdec9414957ff99bbef76aae1118ecb978bc14bd4ca1763d9a19e5d7abc34"} Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.831805 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93cfdec9414957ff99bbef76aae1118ecb978bc14bd4ca1763d9a19e5d7abc34" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.831819 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-chcs8" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.930902 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-r4h94"] Jan 26 19:52:02 crc kubenswrapper[4787]: E0126 19:52:02.931552 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbdba7d-a110-4446-a5c1-071da22f49fd" containerName="neutron-metadata-openstack-openstack-cell1" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.931579 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbdba7d-a110-4446-a5c1-071da22f49fd" containerName="neutron-metadata-openstack-openstack-cell1" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.931860 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cbdba7d-a110-4446-a5c1-071da22f49fd" containerName="neutron-metadata-openstack-openstack-cell1" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.932849 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.938638 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.938940 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.939123 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.939282 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.940342 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:52:02 crc kubenswrapper[4787]: I0126 19:52:02.944662 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-r4h94"] Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.074375 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwdnv\" (UniqueName: \"kubernetes.io/projected/d06cd16e-f936-4443-8d39-d23a3f5a3a99-kube-api-access-pwdnv\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.074607 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " 
pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.074693 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ceph\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.074743 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.074824 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-inventory\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.075247 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.177743 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.178161 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwdnv\" (UniqueName: \"kubernetes.io/projected/d06cd16e-f936-4443-8d39-d23a3f5a3a99-kube-api-access-pwdnv\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.178228 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.178255 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ceph\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.178284 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.178331 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-inventory\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.183302 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.183854 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ceph\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.183871 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-inventory\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.184635 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.191513 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.196285 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwdnv\" (UniqueName: \"kubernetes.io/projected/d06cd16e-f936-4443-8d39-d23a3f5a3a99-kube-api-access-pwdnv\") pod \"libvirt-openstack-openstack-cell1-r4h94\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.251809 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.792601 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.799971 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-r4h94"] Jan 26 19:52:03 crc kubenswrapper[4787]: I0126 19:52:03.844495 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-r4h94" event={"ID":"d06cd16e-f936-4443-8d39-d23a3f5a3a99","Type":"ContainerStarted","Data":"f6d06c9535ab4a7a9ed2e931308f63d13b819e1d99cea567fa459a447485a3b7"} Jan 26 19:52:04 crc kubenswrapper[4787]: I0126 19:52:04.854574 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-r4h94" event={"ID":"d06cd16e-f936-4443-8d39-d23a3f5a3a99","Type":"ContainerStarted","Data":"04ed000777f48975ef3a96df37fd4fdc78cfc9acc82a87d1fdfc0171f25440af"} Jan 26 19:52:04 crc kubenswrapper[4787]: I0126 19:52:04.871404 4787 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-r4h94" podStartSLOduration=2.28569768 podStartE2EDuration="2.871387228s" podCreationTimestamp="2026-01-26 19:52:02 +0000 UTC" firstStartedPulling="2026-01-26 19:52:03.792405167 +0000 UTC m=+7692.499541300" lastFinishedPulling="2026-01-26 19:52:04.378094715 +0000 UTC m=+7693.085230848" observedRunningTime="2026-01-26 19:52:04.869333998 +0000 UTC m=+7693.576470141" watchObservedRunningTime="2026-01-26 19:52:04.871387228 +0000 UTC m=+7693.578523361" Jan 26 19:52:08 crc kubenswrapper[4787]: I0126 19:52:08.589311 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:52:08 crc kubenswrapper[4787]: E0126 19:52:08.590153 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:52:22 crc kubenswrapper[4787]: I0126 19:52:22.592735 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:52:22 crc kubenswrapper[4787]: E0126 19:52:22.593808 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:52:37 crc kubenswrapper[4787]: I0126 19:52:37.590064 4787 scope.go:117] 
"RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:52:37 crc kubenswrapper[4787]: E0126 19:52:37.592428 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:52:51 crc kubenswrapper[4787]: I0126 19:52:51.632438 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:52:52 crc kubenswrapper[4787]: I0126 19:52:52.304737 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"d010f5f239def42b6ea0f33cf19bd1310f287ce5db4c3500db50792cb479b5e1"} Jan 26 19:55:16 crc kubenswrapper[4787]: I0126 19:55:16.808236 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:55:16 crc kubenswrapper[4787]: I0126 19:55:16.810520 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:55:46 crc kubenswrapper[4787]: I0126 19:55:46.807831 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:55:46 crc kubenswrapper[4787]: I0126 19:55:46.808436 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:56:16 crc kubenswrapper[4787]: I0126 19:56:16.808477 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:56:16 crc kubenswrapper[4787]: I0126 19:56:16.809427 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:56:16 crc kubenswrapper[4787]: I0126 19:56:16.809518 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 19:56:16 crc kubenswrapper[4787]: I0126 19:56:16.811381 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d010f5f239def42b6ea0f33cf19bd1310f287ce5db4c3500db50792cb479b5e1"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Jan 26 19:56:16 crc kubenswrapper[4787]: I0126 19:56:16.811871 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://d010f5f239def42b6ea0f33cf19bd1310f287ce5db4c3500db50792cb479b5e1" gracePeriod=600 Jan 26 19:56:17 crc kubenswrapper[4787]: I0126 19:56:17.920254 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="d010f5f239def42b6ea0f33cf19bd1310f287ce5db4c3500db50792cb479b5e1" exitCode=0 Jan 26 19:56:17 crc kubenswrapper[4787]: I0126 19:56:17.920359 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"d010f5f239def42b6ea0f33cf19bd1310f287ce5db4c3500db50792cb479b5e1"} Jan 26 19:56:17 crc kubenswrapper[4787]: I0126 19:56:17.920833 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d"} Jan 26 19:56:17 crc kubenswrapper[4787]: I0126 19:56:17.920867 4787 scope.go:117] "RemoveContainer" containerID="33d803ebb22bd18ac6276ec37f52f1fae8bb553e6585c06dfcf15b05c45f6920" Jan 26 19:56:35 crc kubenswrapper[4787]: I0126 19:56:35.133824 4787 generic.go:334] "Generic (PLEG): container finished" podID="d06cd16e-f936-4443-8d39-d23a3f5a3a99" containerID="04ed000777f48975ef3a96df37fd4fdc78cfc9acc82a87d1fdfc0171f25440af" exitCode=0 Jan 26 19:56:35 crc kubenswrapper[4787]: I0126 19:56:35.133997 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-r4h94" 
event={"ID":"d06cd16e-f936-4443-8d39-d23a3f5a3a99","Type":"ContainerDied","Data":"04ed000777f48975ef3a96df37fd4fdc78cfc9acc82a87d1fdfc0171f25440af"} Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.607064 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.735300 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-inventory\") pod \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.735379 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ssh-key-openstack-cell1\") pod \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.735464 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwdnv\" (UniqueName: \"kubernetes.io/projected/d06cd16e-f936-4443-8d39-d23a3f5a3a99-kube-api-access-pwdnv\") pod \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.735479 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ceph\") pod \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.735496 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-secret-0\") pod \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.735609 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-combined-ca-bundle\") pod \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\" (UID: \"d06cd16e-f936-4443-8d39-d23a3f5a3a99\") " Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.741685 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d06cd16e-f936-4443-8d39-d23a3f5a3a99-kube-api-access-pwdnv" (OuterVolumeSpecName: "kube-api-access-pwdnv") pod "d06cd16e-f936-4443-8d39-d23a3f5a3a99" (UID: "d06cd16e-f936-4443-8d39-d23a3f5a3a99"). InnerVolumeSpecName "kube-api-access-pwdnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.742084 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d06cd16e-f936-4443-8d39-d23a3f5a3a99" (UID: "d06cd16e-f936-4443-8d39-d23a3f5a3a99"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.742813 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ceph" (OuterVolumeSpecName: "ceph") pod "d06cd16e-f936-4443-8d39-d23a3f5a3a99" (UID: "d06cd16e-f936-4443-8d39-d23a3f5a3a99"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.766146 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d06cd16e-f936-4443-8d39-d23a3f5a3a99" (UID: "d06cd16e-f936-4443-8d39-d23a3f5a3a99"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.787389 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-inventory" (OuterVolumeSpecName: "inventory") pod "d06cd16e-f936-4443-8d39-d23a3f5a3a99" (UID: "d06cd16e-f936-4443-8d39-d23a3f5a3a99"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.799302 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d06cd16e-f936-4443-8d39-d23a3f5a3a99" (UID: "d06cd16e-f936-4443-8d39-d23a3f5a3a99"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.837862 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwdnv\" (UniqueName: \"kubernetes.io/projected/d06cd16e-f936-4443-8d39-d23a3f5a3a99-kube-api-access-pwdnv\") on node \"crc\" DevicePath \"\"" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.838135 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.838157 4787 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.838171 4787 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.838183 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:56:36 crc kubenswrapper[4787]: I0126 19:56:36.838196 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d06cd16e-f936-4443-8d39-d23a3f5a3a99-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.160346 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-r4h94" event={"ID":"d06cd16e-f936-4443-8d39-d23a3f5a3a99","Type":"ContainerDied","Data":"f6d06c9535ab4a7a9ed2e931308f63d13b819e1d99cea567fa459a447485a3b7"} 
Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.160893 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d06c9535ab4a7a9ed2e931308f63d13b819e1d99cea567fa459a447485a3b7" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.160394 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-r4h94" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.275274 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-hpfbg"] Jan 26 19:56:37 crc kubenswrapper[4787]: E0126 19:56:37.275807 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06cd16e-f936-4443-8d39-d23a3f5a3a99" containerName="libvirt-openstack-openstack-cell1" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.275832 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06cd16e-f936-4443-8d39-d23a3f5a3a99" containerName="libvirt-openstack-openstack-cell1" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.276277 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d06cd16e-f936-4443-8d39-d23a3f5a3a99" containerName="libvirt-openstack-openstack-cell1" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.277217 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.284795 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.285072 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.285182 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.285277 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.286253 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.286463 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.287418 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-hpfbg"] Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.293571 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.348520 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.348570 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.348769 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.348817 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-inventory\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.348866 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.348925 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ceph\") pod 
\"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.348995 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.349065 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.349101 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.349214 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 
19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.349325 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvpjv\" (UniqueName: \"kubernetes.io/projected/83413341-3b4e-483b-9e6e-af2c64428fb1-kube-api-access-mvpjv\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.451941 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.452062 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-inventory\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.452105 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.452203 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ceph\") pod 
\"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.452237 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.452331 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.452385 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.452469 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.452581 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvpjv\" (UniqueName: \"kubernetes.io/projected/83413341-3b4e-483b-9e6e-af2c64428fb1-kube-api-access-mvpjv\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.452653 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.452696 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.453864 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.454837 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: 
\"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.458422 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.458539 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-inventory\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.458760 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.459124 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.460714 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ceph\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.460850 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.464435 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.464555 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 19:56:37.474940 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvpjv\" (UniqueName: \"kubernetes.io/projected/83413341-3b4e-483b-9e6e-af2c64428fb1-kube-api-access-mvpjv\") pod \"nova-cell1-openstack-openstack-cell1-hpfbg\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") " pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:37 crc kubenswrapper[4787]: I0126 
19:56:37.602352 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" Jan 26 19:56:38 crc kubenswrapper[4787]: I0126 19:56:38.020912 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-hpfbg"] Jan 26 19:56:38 crc kubenswrapper[4787]: W0126 19:56:38.027790 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83413341_3b4e_483b_9e6e_af2c64428fb1.slice/crio-936e2776b998d2ba154d3c5a8ce51de837046df7f8f17148225f39ef9821ae35 WatchSource:0}: Error finding container 936e2776b998d2ba154d3c5a8ce51de837046df7f8f17148225f39ef9821ae35: Status 404 returned error can't find the container with id 936e2776b998d2ba154d3c5a8ce51de837046df7f8f17148225f39ef9821ae35 Jan 26 19:56:38 crc kubenswrapper[4787]: I0126 19:56:38.172412 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" event={"ID":"83413341-3b4e-483b-9e6e-af2c64428fb1","Type":"ContainerStarted","Data":"936e2776b998d2ba154d3c5a8ce51de837046df7f8f17148225f39ef9821ae35"} Jan 26 19:56:39 crc kubenswrapper[4787]: I0126 19:56:39.183088 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" event={"ID":"83413341-3b4e-483b-9e6e-af2c64428fb1","Type":"ContainerStarted","Data":"0fe0fa3d5b79f53f04687b8e2dde8f007172c2482b8f46459f1c179fb1b479d4"} Jan 26 19:56:39 crc kubenswrapper[4787]: I0126 19:56:39.214250 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" podStartSLOduration=1.7804935419999999 podStartE2EDuration="2.214225797s" podCreationTimestamp="2026-01-26 19:56:37 +0000 UTC" firstStartedPulling="2026-01-26 19:56:38.032403467 +0000 UTC m=+7966.739539610" lastFinishedPulling="2026-01-26 19:56:38.466135712 +0000 UTC 
m=+7967.173271865" observedRunningTime="2026-01-26 19:56:39.210598518 +0000 UTC m=+7967.917734691" watchObservedRunningTime="2026-01-26 19:56:39.214225797 +0000 UTC m=+7967.921361950" Jan 26 19:56:52 crc kubenswrapper[4787]: I0126 19:56:52.878034 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vpw8j"] Jan 26 19:56:52 crc kubenswrapper[4787]: I0126 19:56:52.888146 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:56:52 crc kubenswrapper[4787]: I0126 19:56:52.931321 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpw8j"] Jan 26 19:56:53 crc kubenswrapper[4787]: I0126 19:56:53.045267 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd9fq\" (UniqueName: \"kubernetes.io/projected/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-kube-api-access-cd9fq\") pod \"redhat-operators-vpw8j\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:56:53 crc kubenswrapper[4787]: I0126 19:56:53.046272 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-catalog-content\") pod \"redhat-operators-vpw8j\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:56:53 crc kubenswrapper[4787]: I0126 19:56:53.046677 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-utilities\") pod \"redhat-operators-vpw8j\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:56:53 crc kubenswrapper[4787]: I0126 
19:56:53.149108 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-utilities\") pod \"redhat-operators-vpw8j\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:56:53 crc kubenswrapper[4787]: I0126 19:56:53.149478 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd9fq\" (UniqueName: \"kubernetes.io/projected/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-kube-api-access-cd9fq\") pod \"redhat-operators-vpw8j\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:56:53 crc kubenswrapper[4787]: I0126 19:56:53.149955 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-catalog-content\") pod \"redhat-operators-vpw8j\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:56:53 crc kubenswrapper[4787]: I0126 19:56:53.149651 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-utilities\") pod \"redhat-operators-vpw8j\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:56:53 crc kubenswrapper[4787]: I0126 19:56:53.150315 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-catalog-content\") pod \"redhat-operators-vpw8j\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:56:53 crc kubenswrapper[4787]: I0126 19:56:53.167691 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cd9fq\" (UniqueName: \"kubernetes.io/projected/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-kube-api-access-cd9fq\") pod \"redhat-operators-vpw8j\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:56:53 crc kubenswrapper[4787]: I0126 19:56:53.213672 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:56:53 crc kubenswrapper[4787]: I0126 19:56:53.681486 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpw8j"] Jan 26 19:56:54 crc kubenswrapper[4787]: I0126 19:56:54.365290 4787 generic.go:334] "Generic (PLEG): container finished" podID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" containerID="80aec420c67dd68e67c1449e18902d9d30aaa195a59bde07de5fd21a212d4470" exitCode=0 Jan 26 19:56:54 crc kubenswrapper[4787]: I0126 19:56:54.365368 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpw8j" event={"ID":"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb","Type":"ContainerDied","Data":"80aec420c67dd68e67c1449e18902d9d30aaa195a59bde07de5fd21a212d4470"} Jan 26 19:56:54 crc kubenswrapper[4787]: I0126 19:56:54.365645 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpw8j" event={"ID":"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb","Type":"ContainerStarted","Data":"dec5aa3d9ffa471b5d52dfcd1d94779ea1da457030c1551f588aabe9f773f777"} Jan 26 19:56:55 crc kubenswrapper[4787]: I0126 19:56:55.382532 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpw8j" event={"ID":"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb","Type":"ContainerStarted","Data":"4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee"} Jan 26 19:56:59 crc kubenswrapper[4787]: I0126 19:56:59.420790 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" containerID="4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee" exitCode=0 Jan 26 19:56:59 crc kubenswrapper[4787]: I0126 19:56:59.420874 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpw8j" event={"ID":"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb","Type":"ContainerDied","Data":"4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee"} Jan 26 19:57:00 crc kubenswrapper[4787]: I0126 19:57:00.436309 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpw8j" event={"ID":"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb","Type":"ContainerStarted","Data":"0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b"} Jan 26 19:57:00 crc kubenswrapper[4787]: I0126 19:57:00.470998 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vpw8j" podStartSLOduration=2.748007144 podStartE2EDuration="8.470939666s" podCreationTimestamp="2026-01-26 19:56:52 +0000 UTC" firstStartedPulling="2026-01-26 19:56:54.367868491 +0000 UTC m=+7983.075004624" lastFinishedPulling="2026-01-26 19:57:00.090801013 +0000 UTC m=+7988.797937146" observedRunningTime="2026-01-26 19:57:00.460175412 +0000 UTC m=+7989.167311585" watchObservedRunningTime="2026-01-26 19:57:00.470939666 +0000 UTC m=+7989.178075799" Jan 26 19:57:03 crc kubenswrapper[4787]: I0126 19:57:03.214129 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:57:03 crc kubenswrapper[4787]: I0126 19:57:03.215096 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:57:04 crc kubenswrapper[4787]: I0126 19:57:04.283513 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vpw8j" podUID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" 
containerName="registry-server" probeResult="failure" output=< Jan 26 19:57:04 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 26 19:57:04 crc kubenswrapper[4787]: > Jan 26 19:57:13 crc kubenswrapper[4787]: I0126 19:57:13.298663 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:57:13 crc kubenswrapper[4787]: I0126 19:57:13.364337 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:57:13 crc kubenswrapper[4787]: I0126 19:57:13.545998 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpw8j"] Jan 26 19:57:14 crc kubenswrapper[4787]: I0126 19:57:14.614169 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vpw8j" podUID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" containerName="registry-server" containerID="cri-o://0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b" gracePeriod=2 Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.167811 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.291703 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-catalog-content\") pod \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.291868 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd9fq\" (UniqueName: \"kubernetes.io/projected/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-kube-api-access-cd9fq\") pod \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.291897 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-utilities\") pod \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\" (UID: \"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb\") " Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.293288 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-utilities" (OuterVolumeSpecName: "utilities") pod "2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" (UID: "2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.300616 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-kube-api-access-cd9fq" (OuterVolumeSpecName: "kube-api-access-cd9fq") pod "2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" (UID: "2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb"). InnerVolumeSpecName "kube-api-access-cd9fq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.395024 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd9fq\" (UniqueName: \"kubernetes.io/projected/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-kube-api-access-cd9fq\") on node \"crc\" DevicePath \"\"" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.395085 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.423138 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" (UID: "2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.497265 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.631302 4787 generic.go:334] "Generic (PLEG): container finished" podID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" containerID="0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b" exitCode=0 Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.631415 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpw8j" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.631416 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpw8j" event={"ID":"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb","Type":"ContainerDied","Data":"0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b"} Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.631696 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpw8j" event={"ID":"2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb","Type":"ContainerDied","Data":"dec5aa3d9ffa471b5d52dfcd1d94779ea1da457030c1551f588aabe9f773f777"} Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.631726 4787 scope.go:117] "RemoveContainer" containerID="0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.658859 4787 scope.go:117] "RemoveContainer" containerID="4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.674508 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpw8j"] Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.687868 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vpw8j"] Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.691171 4787 scope.go:117] "RemoveContainer" containerID="80aec420c67dd68e67c1449e18902d9d30aaa195a59bde07de5fd21a212d4470" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.739493 4787 scope.go:117] "RemoveContainer" containerID="0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b" Jan 26 19:57:15 crc kubenswrapper[4787]: E0126 19:57:15.739900 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b\": container with ID starting with 0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b not found: ID does not exist" containerID="0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.739967 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b"} err="failed to get container status \"0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b\": rpc error: code = NotFound desc = could not find container \"0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b\": container with ID starting with 0f155cf32b4d5cfaf4f04d704fdf04bcc5ace66d52e6fcbcb1877d51dcf14a6b not found: ID does not exist" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.739997 4787 scope.go:117] "RemoveContainer" containerID="4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee" Jan 26 19:57:15 crc kubenswrapper[4787]: E0126 19:57:15.740366 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee\": container with ID starting with 4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee not found: ID does not exist" containerID="4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.740401 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee"} err="failed to get container status \"4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee\": rpc error: code = NotFound desc = could not find container \"4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee\": container with ID 
starting with 4b2bc5bea28d39183f29c513eea8e845ea6a513d76123666e9b3e5b41e9489ee not found: ID does not exist" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.740426 4787 scope.go:117] "RemoveContainer" containerID="80aec420c67dd68e67c1449e18902d9d30aaa195a59bde07de5fd21a212d4470" Jan 26 19:57:15 crc kubenswrapper[4787]: E0126 19:57:15.741105 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80aec420c67dd68e67c1449e18902d9d30aaa195a59bde07de5fd21a212d4470\": container with ID starting with 80aec420c67dd68e67c1449e18902d9d30aaa195a59bde07de5fd21a212d4470 not found: ID does not exist" containerID="80aec420c67dd68e67c1449e18902d9d30aaa195a59bde07de5fd21a212d4470" Jan 26 19:57:15 crc kubenswrapper[4787]: I0126 19:57:15.741130 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80aec420c67dd68e67c1449e18902d9d30aaa195a59bde07de5fd21a212d4470"} err="failed to get container status \"80aec420c67dd68e67c1449e18902d9d30aaa195a59bde07de5fd21a212d4470\": rpc error: code = NotFound desc = could not find container \"80aec420c67dd68e67c1449e18902d9d30aaa195a59bde07de5fd21a212d4470\": container with ID starting with 80aec420c67dd68e67c1449e18902d9d30aaa195a59bde07de5fd21a212d4470 not found: ID does not exist" Jan 26 19:57:17 crc kubenswrapper[4787]: I0126 19:57:17.612191 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" path="/var/lib/kubelet/pods/2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb/volumes" Jan 26 19:57:18 crc kubenswrapper[4787]: I0126 19:57:18.814160 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b9czb"] Jan 26 19:57:18 crc kubenswrapper[4787]: E0126 19:57:18.814679 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" containerName="extract-utilities" Jan 26 19:57:18 crc 
kubenswrapper[4787]: I0126 19:57:18.814696 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" containerName="extract-utilities" Jan 26 19:57:18 crc kubenswrapper[4787]: E0126 19:57:18.814762 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" containerName="registry-server" Jan 26 19:57:18 crc kubenswrapper[4787]: I0126 19:57:18.814775 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" containerName="registry-server" Jan 26 19:57:18 crc kubenswrapper[4787]: E0126 19:57:18.814798 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" containerName="extract-content" Jan 26 19:57:18 crc kubenswrapper[4787]: I0126 19:57:18.814810 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" containerName="extract-content" Jan 26 19:57:18 crc kubenswrapper[4787]: I0126 19:57:18.815232 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbf3a56-1f7d-425e-a7e0-7e920c5e54cb" containerName="registry-server" Jan 26 19:57:18 crc kubenswrapper[4787]: I0126 19:57:18.819392 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:18 crc kubenswrapper[4787]: I0126 19:57:18.840396 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b9czb"] Jan 26 19:57:18 crc kubenswrapper[4787]: I0126 19:57:18.975243 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f558d5-bd76-4704-9543-030ebc142baa-catalog-content\") pod \"community-operators-b9czb\" (UID: \"13f558d5-bd76-4704-9543-030ebc142baa\") " pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:18 crc kubenswrapper[4787]: I0126 19:57:18.975565 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f558d5-bd76-4704-9543-030ebc142baa-utilities\") pod \"community-operators-b9czb\" (UID: \"13f558d5-bd76-4704-9543-030ebc142baa\") " pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:18 crc kubenswrapper[4787]: I0126 19:57:18.975610 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5jhx\" (UniqueName: \"kubernetes.io/projected/13f558d5-bd76-4704-9543-030ebc142baa-kube-api-access-b5jhx\") pod \"community-operators-b9czb\" (UID: \"13f558d5-bd76-4704-9543-030ebc142baa\") " pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:19 crc kubenswrapper[4787]: I0126 19:57:19.077926 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f558d5-bd76-4704-9543-030ebc142baa-catalog-content\") pod \"community-operators-b9czb\" (UID: \"13f558d5-bd76-4704-9543-030ebc142baa\") " pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:19 crc kubenswrapper[4787]: I0126 19:57:19.078007 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f558d5-bd76-4704-9543-030ebc142baa-utilities\") pod \"community-operators-b9czb\" (UID: \"13f558d5-bd76-4704-9543-030ebc142baa\") " pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:19 crc kubenswrapper[4787]: I0126 19:57:19.078043 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5jhx\" (UniqueName: \"kubernetes.io/projected/13f558d5-bd76-4704-9543-030ebc142baa-kube-api-access-b5jhx\") pod \"community-operators-b9czb\" (UID: \"13f558d5-bd76-4704-9543-030ebc142baa\") " pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:19 crc kubenswrapper[4787]: I0126 19:57:19.078462 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f558d5-bd76-4704-9543-030ebc142baa-catalog-content\") pod \"community-operators-b9czb\" (UID: \"13f558d5-bd76-4704-9543-030ebc142baa\") " pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:19 crc kubenswrapper[4787]: I0126 19:57:19.078711 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f558d5-bd76-4704-9543-030ebc142baa-utilities\") pod \"community-operators-b9czb\" (UID: \"13f558d5-bd76-4704-9543-030ebc142baa\") " pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:19 crc kubenswrapper[4787]: I0126 19:57:19.099437 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5jhx\" (UniqueName: \"kubernetes.io/projected/13f558d5-bd76-4704-9543-030ebc142baa-kube-api-access-b5jhx\") pod \"community-operators-b9czb\" (UID: \"13f558d5-bd76-4704-9543-030ebc142baa\") " pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:19 crc kubenswrapper[4787]: I0126 19:57:19.149311 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:19 crc kubenswrapper[4787]: I0126 19:57:19.669977 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b9czb"] Jan 26 19:57:19 crc kubenswrapper[4787]: W0126 19:57:19.677012 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13f558d5_bd76_4704_9543_030ebc142baa.slice/crio-4e20cc182d45f7f6b01a4b1ae4c5eb06f24524eb2af01ea920793f8fab40b8a1 WatchSource:0}: Error finding container 4e20cc182d45f7f6b01a4b1ae4c5eb06f24524eb2af01ea920793f8fab40b8a1: Status 404 returned error can't find the container with id 4e20cc182d45f7f6b01a4b1ae4c5eb06f24524eb2af01ea920793f8fab40b8a1 Jan 26 19:57:20 crc kubenswrapper[4787]: I0126 19:57:20.691374 4787 generic.go:334] "Generic (PLEG): container finished" podID="13f558d5-bd76-4704-9543-030ebc142baa" containerID="349644ba0fe910fb3ea9ea5be0a97c17079247f7826e4c7ab8e6588a622d596f" exitCode=0 Jan 26 19:57:20 crc kubenswrapper[4787]: I0126 19:57:20.691465 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9czb" event={"ID":"13f558d5-bd76-4704-9543-030ebc142baa","Type":"ContainerDied","Data":"349644ba0fe910fb3ea9ea5be0a97c17079247f7826e4c7ab8e6588a622d596f"} Jan 26 19:57:20 crc kubenswrapper[4787]: I0126 19:57:20.691808 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9czb" event={"ID":"13f558d5-bd76-4704-9543-030ebc142baa","Type":"ContainerStarted","Data":"4e20cc182d45f7f6b01a4b1ae4c5eb06f24524eb2af01ea920793f8fab40b8a1"} Jan 26 19:57:20 crc kubenswrapper[4787]: I0126 19:57:20.694116 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:57:25 crc kubenswrapper[4787]: I0126 19:57:25.778732 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="13f558d5-bd76-4704-9543-030ebc142baa" containerID="1c769fcd4537c7a93791a5dcead5e6ab8ec5b6c4685cafbffef19a5f3cc85553" exitCode=0 Jan 26 19:57:25 crc kubenswrapper[4787]: I0126 19:57:25.778828 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9czb" event={"ID":"13f558d5-bd76-4704-9543-030ebc142baa","Type":"ContainerDied","Data":"1c769fcd4537c7a93791a5dcead5e6ab8ec5b6c4685cafbffef19a5f3cc85553"} Jan 26 19:57:26 crc kubenswrapper[4787]: I0126 19:57:26.802291 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9czb" event={"ID":"13f558d5-bd76-4704-9543-030ebc142baa","Type":"ContainerStarted","Data":"9b4b0b47fdeccd0b2fb44f7e6a19cbc3a586c8ac166c69f24ca52728fc054e01"} Jan 26 19:57:29 crc kubenswrapper[4787]: I0126 19:57:29.150453 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:29 crc kubenswrapper[4787]: I0126 19:57:29.151069 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:29 crc kubenswrapper[4787]: I0126 19:57:29.222683 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:29 crc kubenswrapper[4787]: I0126 19:57:29.255727 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b9czb" podStartSLOduration=5.488816996 podStartE2EDuration="11.255532901s" podCreationTimestamp="2026-01-26 19:57:18 +0000 UTC" firstStartedPulling="2026-01-26 19:57:20.693815248 +0000 UTC m=+8009.400951381" lastFinishedPulling="2026-01-26 19:57:26.460531123 +0000 UTC m=+8015.167667286" observedRunningTime="2026-01-26 19:57:26.827996204 +0000 UTC m=+8015.535132337" watchObservedRunningTime="2026-01-26 19:57:29.255532901 +0000 UTC m=+8017.962669064" Jan 26 
19:57:39 crc kubenswrapper[4787]: I0126 19:57:39.226630 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b9czb" Jan 26 19:57:39 crc kubenswrapper[4787]: I0126 19:57:39.322350 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b9czb"] Jan 26 19:57:39 crc kubenswrapper[4787]: I0126 19:57:39.375767 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4pl9"] Jan 26 19:57:39 crc kubenswrapper[4787]: I0126 19:57:39.376008 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f4pl9" podUID="73da2f06-dbb9-430a-8067-d5396afebf85" containerName="registry-server" containerID="cri-o://72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4" gracePeriod=2 Jan 26 19:57:39 crc kubenswrapper[4787]: I0126 19:57:39.906776 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f4pl9" Jan 26 19:57:39 crc kubenswrapper[4787]: I0126 19:57:39.979341 4787 generic.go:334] "Generic (PLEG): container finished" podID="73da2f06-dbb9-430a-8067-d5396afebf85" containerID="72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4" exitCode=0 Jan 26 19:57:39 crc kubenswrapper[4787]: I0126 19:57:39.979852 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4pl9" event={"ID":"73da2f06-dbb9-430a-8067-d5396afebf85","Type":"ContainerDied","Data":"72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4"} Jan 26 19:57:39 crc kubenswrapper[4787]: I0126 19:57:39.979905 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4pl9" Jan 26 19:57:39 crc kubenswrapper[4787]: I0126 19:57:39.979936 4787 scope.go:117] "RemoveContainer" containerID="72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4" Jan 26 19:57:39 crc kubenswrapper[4787]: I0126 19:57:39.979925 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4pl9" event={"ID":"73da2f06-dbb9-430a-8067-d5396afebf85","Type":"ContainerDied","Data":"d9c529be4d49430d6fee5655165e4ef9ed094693e04e50ebc1de409d86ec4d1d"} Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.020296 4787 scope.go:117] "RemoveContainer" containerID="77cee7b7d66916174fe8cce68938d7632a552bf1d4336376afe03d21d4865f6a" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.068553 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-utilities\") pod \"73da2f06-dbb9-430a-8067-d5396afebf85\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") " Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.068734 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggfww\" (UniqueName: \"kubernetes.io/projected/73da2f06-dbb9-430a-8067-d5396afebf85-kube-api-access-ggfww\") pod \"73da2f06-dbb9-430a-8067-d5396afebf85\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") " Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.068933 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-catalog-content\") pod \"73da2f06-dbb9-430a-8067-d5396afebf85\" (UID: \"73da2f06-dbb9-430a-8067-d5396afebf85\") " Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.069825 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-utilities" (OuterVolumeSpecName: "utilities") pod "73da2f06-dbb9-430a-8067-d5396afebf85" (UID: "73da2f06-dbb9-430a-8067-d5396afebf85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.071705 4787 scope.go:117] "RemoveContainer" containerID="863cfe7a2db0da1768295c8b073be461e70556ce22239349a5ed0b2586f0198b" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.074817 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73da2f06-dbb9-430a-8067-d5396afebf85-kube-api-access-ggfww" (OuterVolumeSpecName: "kube-api-access-ggfww") pod "73da2f06-dbb9-430a-8067-d5396afebf85" (UID: "73da2f06-dbb9-430a-8067-d5396afebf85"). InnerVolumeSpecName "kube-api-access-ggfww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.148287 4787 scope.go:117] "RemoveContainer" containerID="72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.150441 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73da2f06-dbb9-430a-8067-d5396afebf85" (UID: "73da2f06-dbb9-430a-8067-d5396afebf85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:57:40 crc kubenswrapper[4787]: E0126 19:57:40.151919 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4\": container with ID starting with 72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4 not found: ID does not exist" containerID="72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.151982 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4"} err="failed to get container status \"72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4\": rpc error: code = NotFound desc = could not find container \"72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4\": container with ID starting with 72daf0e328b008b404f59075a9ba7781a30b8a3fedca6362aafa55d4555e05f4 not found: ID does not exist" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.152020 4787 scope.go:117] "RemoveContainer" containerID="77cee7b7d66916174fe8cce68938d7632a552bf1d4336376afe03d21d4865f6a" Jan 26 19:57:40 crc kubenswrapper[4787]: E0126 19:57:40.152471 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77cee7b7d66916174fe8cce68938d7632a552bf1d4336376afe03d21d4865f6a\": container with ID starting with 77cee7b7d66916174fe8cce68938d7632a552bf1d4336376afe03d21d4865f6a not found: ID does not exist" containerID="77cee7b7d66916174fe8cce68938d7632a552bf1d4336376afe03d21d4865f6a" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.152513 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77cee7b7d66916174fe8cce68938d7632a552bf1d4336376afe03d21d4865f6a"} 
err="failed to get container status \"77cee7b7d66916174fe8cce68938d7632a552bf1d4336376afe03d21d4865f6a\": rpc error: code = NotFound desc = could not find container \"77cee7b7d66916174fe8cce68938d7632a552bf1d4336376afe03d21d4865f6a\": container with ID starting with 77cee7b7d66916174fe8cce68938d7632a552bf1d4336376afe03d21d4865f6a not found: ID does not exist" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.152540 4787 scope.go:117] "RemoveContainer" containerID="863cfe7a2db0da1768295c8b073be461e70556ce22239349a5ed0b2586f0198b" Jan 26 19:57:40 crc kubenswrapper[4787]: E0126 19:57:40.152809 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863cfe7a2db0da1768295c8b073be461e70556ce22239349a5ed0b2586f0198b\": container with ID starting with 863cfe7a2db0da1768295c8b073be461e70556ce22239349a5ed0b2586f0198b not found: ID does not exist" containerID="863cfe7a2db0da1768295c8b073be461e70556ce22239349a5ed0b2586f0198b" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.152832 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863cfe7a2db0da1768295c8b073be461e70556ce22239349a5ed0b2586f0198b"} err="failed to get container status \"863cfe7a2db0da1768295c8b073be461e70556ce22239349a5ed0b2586f0198b\": rpc error: code = NotFound desc = could not find container \"863cfe7a2db0da1768295c8b073be461e70556ce22239349a5ed0b2586f0198b\": container with ID starting with 863cfe7a2db0da1768295c8b073be461e70556ce22239349a5ed0b2586f0198b not found: ID does not exist" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.172647 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggfww\" (UniqueName: \"kubernetes.io/projected/73da2f06-dbb9-430a-8067-d5396afebf85-kube-api-access-ggfww\") on node \"crc\" DevicePath \"\"" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.172680 4787 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.172690 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73da2f06-dbb9-430a-8067-d5396afebf85-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.322765 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4pl9"] Jan 26 19:57:40 crc kubenswrapper[4787]: I0126 19:57:40.336063 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f4pl9"] Jan 26 19:57:41 crc kubenswrapper[4787]: I0126 19:57:41.604530 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73da2f06-dbb9-430a-8067-d5396afebf85" path="/var/lib/kubelet/pods/73da2f06-dbb9-430a-8067-d5396afebf85/volumes" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.141540 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vlj5b"] Jan 26 19:57:59 crc kubenswrapper[4787]: E0126 19:57:59.142732 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73da2f06-dbb9-430a-8067-d5396afebf85" containerName="extract-content" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.142750 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="73da2f06-dbb9-430a-8067-d5396afebf85" containerName="extract-content" Jan 26 19:57:59 crc kubenswrapper[4787]: E0126 19:57:59.142781 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73da2f06-dbb9-430a-8067-d5396afebf85" containerName="extract-utilities" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.142794 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="73da2f06-dbb9-430a-8067-d5396afebf85" containerName="extract-utilities" Jan 26 19:57:59 crc 
kubenswrapper[4787]: E0126 19:57:59.142816 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73da2f06-dbb9-430a-8067-d5396afebf85" containerName="registry-server" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.142825 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="73da2f06-dbb9-430a-8067-d5396afebf85" containerName="registry-server" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.143162 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="73da2f06-dbb9-430a-8067-d5396afebf85" containerName="registry-server" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.145193 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.157803 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlj5b"] Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.212691 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxqwh\" (UniqueName: \"kubernetes.io/projected/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-kube-api-access-bxqwh\") pod \"certified-operators-vlj5b\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.213119 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-utilities\") pod \"certified-operators-vlj5b\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.213489 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-catalog-content\") pod \"certified-operators-vlj5b\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.315425 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxqwh\" (UniqueName: \"kubernetes.io/projected/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-kube-api-access-bxqwh\") pod \"certified-operators-vlj5b\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.315510 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-utilities\") pod \"certified-operators-vlj5b\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.315635 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-catalog-content\") pod \"certified-operators-vlj5b\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.316187 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-catalog-content\") pod \"certified-operators-vlj5b\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.316196 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-utilities\") pod \"certified-operators-vlj5b\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.344022 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxqwh\" (UniqueName: \"kubernetes.io/projected/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-kube-api-access-bxqwh\") pod \"certified-operators-vlj5b\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:57:59 crc kubenswrapper[4787]: I0126 19:57:59.518867 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:58:00 crc kubenswrapper[4787]: I0126 19:58:00.060064 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlj5b"] Jan 26 19:58:00 crc kubenswrapper[4787]: I0126 19:58:00.544336 4787 generic.go:334] "Generic (PLEG): container finished" podID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" containerID="59d9a6ac09d0fd776e600df1c230759748841b19839eb431dad866946e9ff831" exitCode=0 Jan 26 19:58:00 crc kubenswrapper[4787]: I0126 19:58:00.544447 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlj5b" event={"ID":"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0","Type":"ContainerDied","Data":"59d9a6ac09d0fd776e600df1c230759748841b19839eb431dad866946e9ff831"} Jan 26 19:58:00 crc kubenswrapper[4787]: I0126 19:58:00.545430 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlj5b" event={"ID":"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0","Type":"ContainerStarted","Data":"e3f755de3ae875f8e84313c83b5ee024bbbb1c9202f7acf6e426d44f650a3d83"} Jan 26 19:58:01 crc kubenswrapper[4787]: I0126 19:58:01.559193 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vlj5b" event={"ID":"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0","Type":"ContainerStarted","Data":"ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83"} Jan 26 19:58:02 crc kubenswrapper[4787]: I0126 19:58:02.571389 4787 generic.go:334] "Generic (PLEG): container finished" podID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" containerID="ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83" exitCode=0 Jan 26 19:58:02 crc kubenswrapper[4787]: I0126 19:58:02.571595 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlj5b" event={"ID":"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0","Type":"ContainerDied","Data":"ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83"} Jan 26 19:58:03 crc kubenswrapper[4787]: I0126 19:58:03.587542 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlj5b" event={"ID":"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0","Type":"ContainerStarted","Data":"01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd"} Jan 26 19:58:03 crc kubenswrapper[4787]: I0126 19:58:03.626067 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vlj5b" podStartSLOduration=2.030357998 podStartE2EDuration="4.626041203s" podCreationTimestamp="2026-01-26 19:57:59 +0000 UTC" firstStartedPulling="2026-01-26 19:58:00.546483204 +0000 UTC m=+8049.253619347" lastFinishedPulling="2026-01-26 19:58:03.142166379 +0000 UTC m=+8051.849302552" observedRunningTime="2026-01-26 19:58:03.615986847 +0000 UTC m=+8052.323123010" watchObservedRunningTime="2026-01-26 19:58:03.626041203 +0000 UTC m=+8052.333177346" Jan 26 19:58:09 crc kubenswrapper[4787]: I0126 19:58:09.518982 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:58:09 crc kubenswrapper[4787]: I0126 19:58:09.519566 
4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:58:09 crc kubenswrapper[4787]: I0126 19:58:09.571472 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:58:09 crc kubenswrapper[4787]: I0126 19:58:09.706297 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:58:09 crc kubenswrapper[4787]: I0126 19:58:09.812653 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlj5b"] Jan 26 19:58:11 crc kubenswrapper[4787]: I0126 19:58:11.686082 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vlj5b" podUID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" containerName="registry-server" containerID="cri-o://01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd" gracePeriod=2 Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.289476 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.426145 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-utilities\") pod \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.426263 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-catalog-content\") pod \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.426372 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxqwh\" (UniqueName: \"kubernetes.io/projected/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-kube-api-access-bxqwh\") pod \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\" (UID: \"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0\") " Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.426927 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-utilities" (OuterVolumeSpecName: "utilities") pod "4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" (UID: "4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.437292 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-kube-api-access-bxqwh" (OuterVolumeSpecName: "kube-api-access-bxqwh") pod "4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" (UID: "4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0"). InnerVolumeSpecName "kube-api-access-bxqwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.482699 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" (UID: "4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.529150 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.529186 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.529199 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxqwh\" (UniqueName: \"kubernetes.io/projected/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0-kube-api-access-bxqwh\") on node \"crc\" DevicePath \"\"" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.698868 4787 generic.go:334] "Generic (PLEG): container finished" podID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" containerID="01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd" exitCode=0 Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.698931 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlj5b" event={"ID":"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0","Type":"ContainerDied","Data":"01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd"} Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.699115 4787 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlj5b" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.699144 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlj5b" event={"ID":"4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0","Type":"ContainerDied","Data":"e3f755de3ae875f8e84313c83b5ee024bbbb1c9202f7acf6e426d44f650a3d83"} Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.699207 4787 scope.go:117] "RemoveContainer" containerID="01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.753071 4787 scope.go:117] "RemoveContainer" containerID="ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.766165 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlj5b"] Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.779724 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vlj5b"] Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.783809 4787 scope.go:117] "RemoveContainer" containerID="59d9a6ac09d0fd776e600df1c230759748841b19839eb431dad866946e9ff831" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.861087 4787 scope.go:117] "RemoveContainer" containerID="01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd" Jan 26 19:58:12 crc kubenswrapper[4787]: E0126 19:58:12.861845 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd\": container with ID starting with 01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd not found: ID does not exist" containerID="01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.861893 
4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd"} err="failed to get container status \"01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd\": rpc error: code = NotFound desc = could not find container \"01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd\": container with ID starting with 01c4bf24765bfa8e45fc53b440113ef5ba1c21d1fbc3e4d23202039b9c1018fd not found: ID does not exist" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.861922 4787 scope.go:117] "RemoveContainer" containerID="ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83" Jan 26 19:58:12 crc kubenswrapper[4787]: E0126 19:58:12.862285 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83\": container with ID starting with ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83 not found: ID does not exist" containerID="ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.862405 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83"} err="failed to get container status \"ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83\": rpc error: code = NotFound desc = could not find container \"ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83\": container with ID starting with ae61f242ccb3f5306cb705f2e7bf34c77c6e3d412bcee27ec45974bee503da83 not found: ID does not exist" Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.862500 4787 scope.go:117] "RemoveContainer" containerID="59d9a6ac09d0fd776e600df1c230759748841b19839eb431dad866946e9ff831" Jan 26 19:58:12 crc kubenswrapper[4787]: E0126 
19:58:12.862936 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d9a6ac09d0fd776e600df1c230759748841b19839eb431dad866946e9ff831\": container with ID starting with 59d9a6ac09d0fd776e600df1c230759748841b19839eb431dad866946e9ff831 not found: ID does not exist" containerID="59d9a6ac09d0fd776e600df1c230759748841b19839eb431dad866946e9ff831"
Jan 26 19:58:12 crc kubenswrapper[4787]: I0126 19:58:12.863022 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d9a6ac09d0fd776e600df1c230759748841b19839eb431dad866946e9ff831"} err="failed to get container status \"59d9a6ac09d0fd776e600df1c230759748841b19839eb431dad866946e9ff831\": rpc error: code = NotFound desc = could not find container \"59d9a6ac09d0fd776e600df1c230759748841b19839eb431dad866946e9ff831\": container with ID starting with 59d9a6ac09d0fd776e600df1c230759748841b19839eb431dad866946e9ff831 not found: ID does not exist"
Jan 26 19:58:13 crc kubenswrapper[4787]: I0126 19:58:13.610497 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" path="/var/lib/kubelet/pods/4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0/volumes"
Jan 26 19:58:46 crc kubenswrapper[4787]: I0126 19:58:46.808228 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 19:58:46 crc kubenswrapper[4787]: I0126 19:58:46.808824 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 19:59:16 crc kubenswrapper[4787]: I0126 19:59:16.808658 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 19:59:16 crc kubenswrapper[4787]: I0126 19:59:16.809177 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 19:59:23 crc kubenswrapper[4787]: I0126 19:59:23.482394 4787 generic.go:334] "Generic (PLEG): container finished" podID="83413341-3b4e-483b-9e6e-af2c64428fb1" containerID="0fe0fa3d5b79f53f04687b8e2dde8f007172c2482b8f46459f1c179fb1b479d4" exitCode=0
Jan 26 19:59:23 crc kubenswrapper[4787]: I0126 19:59:23.482496 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" event={"ID":"83413341-3b4e-483b-9e6e-af2c64428fb1","Type":"ContainerDied","Data":"0fe0fa3d5b79f53f04687b8e2dde8f007172c2482b8f46459f1c179fb1b479d4"}
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.081829 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.259180 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ssh-key-openstack-cell1\") pod \"83413341-3b4e-483b-9e6e-af2c64428fb1\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") "
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.259238 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-0\") pod \"83413341-3b4e-483b-9e6e-af2c64428fb1\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") "
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.259312 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-0\") pod \"83413341-3b4e-483b-9e6e-af2c64428fb1\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") "
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.259362 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-1\") pod \"83413341-3b4e-483b-9e6e-af2c64428fb1\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") "
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.259430 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ceph\") pod \"83413341-3b4e-483b-9e6e-af2c64428fb1\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") "
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.259499 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-0\") pod \"83413341-3b4e-483b-9e6e-af2c64428fb1\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") "
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.259569 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-inventory\") pod \"83413341-3b4e-483b-9e6e-af2c64428fb1\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") "
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.259603 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-1\") pod \"83413341-3b4e-483b-9e6e-af2c64428fb1\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") "
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.259630 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvpjv\" (UniqueName: \"kubernetes.io/projected/83413341-3b4e-483b-9e6e-af2c64428fb1-kube-api-access-mvpjv\") pod \"83413341-3b4e-483b-9e6e-af2c64428fb1\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") "
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.259683 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-1\") pod \"83413341-3b4e-483b-9e6e-af2c64428fb1\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") "
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.259714 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-combined-ca-bundle\") pod \"83413341-3b4e-483b-9e6e-af2c64428fb1\" (UID: \"83413341-3b4e-483b-9e6e-af2c64428fb1\") "
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.269021 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "83413341-3b4e-483b-9e6e-af2c64428fb1" (UID: "83413341-3b4e-483b-9e6e-af2c64428fb1"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.269119 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ceph" (OuterVolumeSpecName: "ceph") pod "83413341-3b4e-483b-9e6e-af2c64428fb1" (UID: "83413341-3b4e-483b-9e6e-af2c64428fb1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.271716 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83413341-3b4e-483b-9e6e-af2c64428fb1-kube-api-access-mvpjv" (OuterVolumeSpecName: "kube-api-access-mvpjv") pod "83413341-3b4e-483b-9e6e-af2c64428fb1" (UID: "83413341-3b4e-483b-9e6e-af2c64428fb1"). InnerVolumeSpecName "kube-api-access-mvpjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.316615 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "83413341-3b4e-483b-9e6e-af2c64428fb1" (UID: "83413341-3b4e-483b-9e6e-af2c64428fb1"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.320480 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "83413341-3b4e-483b-9e6e-af2c64428fb1" (UID: "83413341-3b4e-483b-9e6e-af2c64428fb1"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.322801 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "83413341-3b4e-483b-9e6e-af2c64428fb1" (UID: "83413341-3b4e-483b-9e6e-af2c64428fb1"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.327425 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-inventory" (OuterVolumeSpecName: "inventory") pod "83413341-3b4e-483b-9e6e-af2c64428fb1" (UID: "83413341-3b4e-483b-9e6e-af2c64428fb1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.334800 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "83413341-3b4e-483b-9e6e-af2c64428fb1" (UID: "83413341-3b4e-483b-9e6e-af2c64428fb1"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.340306 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "83413341-3b4e-483b-9e6e-af2c64428fb1" (UID: "83413341-3b4e-483b-9e6e-af2c64428fb1"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.341639 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "83413341-3b4e-483b-9e6e-af2c64428fb1" (UID: "83413341-3b4e-483b-9e6e-af2c64428fb1"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.343884 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "83413341-3b4e-483b-9e6e-af2c64428fb1" (UID: "83413341-3b4e-483b-9e6e-af2c64428fb1"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.362138 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ceph\") on node \"crc\" DevicePath \"\""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.362172 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.362185 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.362194 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.362203 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvpjv\" (UniqueName: \"kubernetes.io/projected/83413341-3b4e-483b-9e6e-af2c64428fb1-kube-api-access-mvpjv\") on node \"crc\" DevicePath \"\""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.362211 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.362221 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.362231 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.362241 4787 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.362248 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.362257 4787 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/83413341-3b4e-483b-9e6e-af2c64428fb1-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.515347 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg" event={"ID":"83413341-3b4e-483b-9e6e-af2c64428fb1","Type":"ContainerDied","Data":"936e2776b998d2ba154d3c5a8ce51de837046df7f8f17148225f39ef9821ae35"}
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.516055 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936e2776b998d2ba154d3c5a8ce51de837046df7f8f17148225f39ef9821ae35"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.515469 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-hpfbg"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.641065 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-wbcsn"]
Jan 26 19:59:25 crc kubenswrapper[4787]: E0126 19:59:25.641465 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" containerName="extract-utilities"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.641484 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" containerName="extract-utilities"
Jan 26 19:59:25 crc kubenswrapper[4787]: E0126 19:59:25.641499 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" containerName="registry-server"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.641506 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" containerName="registry-server"
Jan 26 19:59:25 crc kubenswrapper[4787]: E0126 19:59:25.641538 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" containerName="extract-content"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.641549 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" containerName="extract-content"
Jan 26 19:59:25 crc kubenswrapper[4787]: E0126 19:59:25.641579 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83413341-3b4e-483b-9e6e-af2c64428fb1" containerName="nova-cell1-openstack-openstack-cell1"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.641586 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="83413341-3b4e-483b-9e6e-af2c64428fb1" containerName="nova-cell1-openstack-openstack-cell1"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.641774 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aef5a27-7a4b-46c5-b21e-d49b3e71eaa0" containerName="registry-server"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.641790 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="83413341-3b4e-483b-9e6e-af2c64428fb1" containerName="nova-cell1-openstack-openstack-cell1"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.642494 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.645259 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.645643 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.646276 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.647026 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.654262 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.661080 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-wbcsn"]
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.771252 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceph\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.771341 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-inventory\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.771385 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.771637 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.771971 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2g4\" (UniqueName: \"kubernetes.io/projected/529c5af2-a36e-4826-88a7-0deed8d59e7c-kube-api-access-zl2g4\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.772196 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.772359 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.772669 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.874486 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.874549 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.874630 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.874690 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceph\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.874715 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-inventory\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.874737 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.874777 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.874806 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl2g4\" (UniqueName: \"kubernetes.io/projected/529c5af2-a36e-4826-88a7-0deed8d59e7c-kube-api-access-zl2g4\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.879102 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.879257 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.879407 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.880088 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceph\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.880179 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.881056 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.882811 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-inventory\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.897917 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl2g4\" (UniqueName: \"kubernetes.io/projected/529c5af2-a36e-4826-88a7-0deed8d59e7c-kube-api-access-zl2g4\") pod \"telemetry-openstack-openstack-cell1-wbcsn\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:25 crc kubenswrapper[4787]: I0126 19:59:25.973380 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wbcsn"
Jan 26 19:59:26 crc kubenswrapper[4787]: I0126 19:59:26.595198 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-wbcsn"]
Jan 26 19:59:27 crc kubenswrapper[4787]: I0126 19:59:27.544106 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wbcsn" event={"ID":"529c5af2-a36e-4826-88a7-0deed8d59e7c","Type":"ContainerStarted","Data":"9bde2d65fe9213248d82720f41b6e48c364129905e8a269b88895413d1f96e2f"}
Jan 26 19:59:27 crc kubenswrapper[4787]: I0126 19:59:27.544637 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wbcsn" event={"ID":"529c5af2-a36e-4826-88a7-0deed8d59e7c","Type":"ContainerStarted","Data":"a0e386a3cbd3722313b076ac6c874495201dfe85919cfcc5c1fe7a925ce79253"}
Jan 26 19:59:27 crc kubenswrapper[4787]: I0126 19:59:27.586625 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-wbcsn" podStartSLOduration=2.070445233 podStartE2EDuration="2.586601996s" podCreationTimestamp="2026-01-26 19:59:25 +0000 UTC" firstStartedPulling="2026-01-26 19:59:26.599506166 +0000 UTC m=+8135.306642299" lastFinishedPulling="2026-01-26 19:59:27.115662929 +0000 UTC m=+8135.822799062" observedRunningTime="2026-01-26 19:59:27.57290527 +0000 UTC m=+8136.280041413" watchObservedRunningTime="2026-01-26 19:59:27.586601996 +0000 UTC m=+8136.293738139"
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.609857 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jqbjl"]
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.612460 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqbjl"
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.636014 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqbjl"]
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.701120 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-catalog-content\") pod \"redhat-marketplace-jqbjl\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " pod="openshift-marketplace/redhat-marketplace-jqbjl"
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.701279 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-utilities\") pod \"redhat-marketplace-jqbjl\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " pod="openshift-marketplace/redhat-marketplace-jqbjl"
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.701316 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szk89\" (UniqueName: \"kubernetes.io/projected/e03b479a-6e3c-41f1-98a0-bd22afce1511-kube-api-access-szk89\") pod \"redhat-marketplace-jqbjl\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " pod="openshift-marketplace/redhat-marketplace-jqbjl"
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.803320 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-utilities\") pod \"redhat-marketplace-jqbjl\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " pod="openshift-marketplace/redhat-marketplace-jqbjl"
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.803389 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szk89\" (UniqueName: \"kubernetes.io/projected/e03b479a-6e3c-41f1-98a0-bd22afce1511-kube-api-access-szk89\") pod \"redhat-marketplace-jqbjl\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " pod="openshift-marketplace/redhat-marketplace-jqbjl"
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.803442 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-catalog-content\") pod \"redhat-marketplace-jqbjl\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " pod="openshift-marketplace/redhat-marketplace-jqbjl"
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.803984 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-catalog-content\") pod \"redhat-marketplace-jqbjl\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " pod="openshift-marketplace/redhat-marketplace-jqbjl"
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.804191 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-utilities\") pod \"redhat-marketplace-jqbjl\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " pod="openshift-marketplace/redhat-marketplace-jqbjl"
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.832233 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szk89\" (UniqueName: \"kubernetes.io/projected/e03b479a-6e3c-41f1-98a0-bd22afce1511-kube-api-access-szk89\") pod \"redhat-marketplace-jqbjl\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " pod="openshift-marketplace/redhat-marketplace-jqbjl"
Jan 26 19:59:38 crc kubenswrapper[4787]: I0126 19:59:38.943997 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqbjl"
Jan 26 19:59:39 crc kubenswrapper[4787]: I0126 19:59:39.443463 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqbjl"]
Jan 26 19:59:39 crc kubenswrapper[4787]: W0126 19:59:39.449902 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode03b479a_6e3c_41f1_98a0_bd22afce1511.slice/crio-a93824b07986944527515e5e22208c04722fb098c70be6cd7ddfc946b5ddcdb9 WatchSource:0}: Error finding container a93824b07986944527515e5e22208c04722fb098c70be6cd7ddfc946b5ddcdb9: Status 404 returned error can't find the container with id a93824b07986944527515e5e22208c04722fb098c70be6cd7ddfc946b5ddcdb9
Jan 26 19:59:39 crc kubenswrapper[4787]: I0126 19:59:39.693764 4787 generic.go:334] "Generic (PLEG): container finished" podID="e03b479a-6e3c-41f1-98a0-bd22afce1511" containerID="74ba323e29fa1cf7408fdad23bec88751550a8d5d5db8b6ca3312b26f32eeb16" exitCode=0
Jan 26 19:59:39 crc kubenswrapper[4787]: I0126 19:59:39.694726 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqbjl" event={"ID":"e03b479a-6e3c-41f1-98a0-bd22afce1511","Type":"ContainerDied","Data":"74ba323e29fa1cf7408fdad23bec88751550a8d5d5db8b6ca3312b26f32eeb16"}
Jan 26 19:59:39 crc kubenswrapper[4787]: I0126 19:59:39.695796 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqbjl" event={"ID":"e03b479a-6e3c-41f1-98a0-bd22afce1511","Type":"ContainerStarted","Data":"a93824b07986944527515e5e22208c04722fb098c70be6cd7ddfc946b5ddcdb9"}
Jan 26 19:59:41 crc kubenswrapper[4787]: I0126 19:59:41.714197 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqbjl"
event={"ID":"e03b479a-6e3c-41f1-98a0-bd22afce1511","Type":"ContainerStarted","Data":"97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a"} Jan 26 19:59:42 crc kubenswrapper[4787]: I0126 19:59:42.730132 4787 generic.go:334] "Generic (PLEG): container finished" podID="e03b479a-6e3c-41f1-98a0-bd22afce1511" containerID="97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a" exitCode=0 Jan 26 19:59:42 crc kubenswrapper[4787]: I0126 19:59:42.730252 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqbjl" event={"ID":"e03b479a-6e3c-41f1-98a0-bd22afce1511","Type":"ContainerDied","Data":"97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a"} Jan 26 19:59:43 crc kubenswrapper[4787]: I0126 19:59:43.747231 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqbjl" event={"ID":"e03b479a-6e3c-41f1-98a0-bd22afce1511","Type":"ContainerStarted","Data":"fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95"} Jan 26 19:59:43 crc kubenswrapper[4787]: I0126 19:59:43.773656 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jqbjl" podStartSLOduration=2.356651569 podStartE2EDuration="5.773637483s" podCreationTimestamp="2026-01-26 19:59:38 +0000 UTC" firstStartedPulling="2026-01-26 19:59:39.698156718 +0000 UTC m=+8148.405292861" lastFinishedPulling="2026-01-26 19:59:43.115142632 +0000 UTC m=+8151.822278775" observedRunningTime="2026-01-26 19:59:43.771107521 +0000 UTC m=+8152.478243674" watchObservedRunningTime="2026-01-26 19:59:43.773637483 +0000 UTC m=+8152.480773606" Jan 26 19:59:46 crc kubenswrapper[4787]: I0126 19:59:46.808551 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 26 19:59:46 crc kubenswrapper[4787]: I0126 19:59:46.809208 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:59:46 crc kubenswrapper[4787]: I0126 19:59:46.809275 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 19:59:46 crc kubenswrapper[4787]: I0126 19:59:46.810344 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:59:46 crc kubenswrapper[4787]: I0126 19:59:46.810427 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" gracePeriod=600 Jan 26 19:59:46 crc kubenswrapper[4787]: E0126 19:59:46.967682 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:59:47 crc 
kubenswrapper[4787]: I0126 19:59:47.801428 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" exitCode=0 Jan 26 19:59:47 crc kubenswrapper[4787]: I0126 19:59:47.801492 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d"} Jan 26 19:59:47 crc kubenswrapper[4787]: I0126 19:59:47.801731 4787 scope.go:117] "RemoveContainer" containerID="d010f5f239def42b6ea0f33cf19bd1310f287ce5db4c3500db50792cb479b5e1" Jan 26 19:59:47 crc kubenswrapper[4787]: I0126 19:59:47.802513 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 19:59:47 crc kubenswrapper[4787]: E0126 19:59:47.802839 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 19:59:48 crc kubenswrapper[4787]: I0126 19:59:48.945203 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jqbjl" Jan 26 19:59:48 crc kubenswrapper[4787]: I0126 19:59:48.945541 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jqbjl" Jan 26 19:59:49 crc kubenswrapper[4787]: I0126 19:59:49.015843 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jqbjl" Jan 26 
19:59:49 crc kubenswrapper[4787]: I0126 19:59:49.892621 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jqbjl" Jan 26 19:59:49 crc kubenswrapper[4787]: I0126 19:59:49.956492 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqbjl"] Jan 26 19:59:51 crc kubenswrapper[4787]: I0126 19:59:51.848324 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jqbjl" podUID="e03b479a-6e3c-41f1-98a0-bd22afce1511" containerName="registry-server" containerID="cri-o://fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95" gracePeriod=2 Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.451485 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqbjl" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.643715 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-utilities\") pod \"e03b479a-6e3c-41f1-98a0-bd22afce1511\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.643877 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-catalog-content\") pod \"e03b479a-6e3c-41f1-98a0-bd22afce1511\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.644036 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szk89\" (UniqueName: \"kubernetes.io/projected/e03b479a-6e3c-41f1-98a0-bd22afce1511-kube-api-access-szk89\") pod \"e03b479a-6e3c-41f1-98a0-bd22afce1511\" (UID: \"e03b479a-6e3c-41f1-98a0-bd22afce1511\") " Jan 26 
19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.644896 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-utilities" (OuterVolumeSpecName: "utilities") pod "e03b479a-6e3c-41f1-98a0-bd22afce1511" (UID: "e03b479a-6e3c-41f1-98a0-bd22afce1511"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.650221 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e03b479a-6e3c-41f1-98a0-bd22afce1511-kube-api-access-szk89" (OuterVolumeSpecName: "kube-api-access-szk89") pod "e03b479a-6e3c-41f1-98a0-bd22afce1511" (UID: "e03b479a-6e3c-41f1-98a0-bd22afce1511"). InnerVolumeSpecName "kube-api-access-szk89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.665117 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e03b479a-6e3c-41f1-98a0-bd22afce1511" (UID: "e03b479a-6e3c-41f1-98a0-bd22afce1511"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.747892 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.747930 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e03b479a-6e3c-41f1-98a0-bd22afce1511-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.747965 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szk89\" (UniqueName: \"kubernetes.io/projected/e03b479a-6e3c-41f1-98a0-bd22afce1511-kube-api-access-szk89\") on node \"crc\" DevicePath \"\"" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.864712 4787 generic.go:334] "Generic (PLEG): container finished" podID="e03b479a-6e3c-41f1-98a0-bd22afce1511" containerID="fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95" exitCode=0 Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.864767 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqbjl" event={"ID":"e03b479a-6e3c-41f1-98a0-bd22afce1511","Type":"ContainerDied","Data":"fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95"} Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.864800 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqbjl" event={"ID":"e03b479a-6e3c-41f1-98a0-bd22afce1511","Type":"ContainerDied","Data":"a93824b07986944527515e5e22208c04722fb098c70be6cd7ddfc946b5ddcdb9"} Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.864821 4787 scope.go:117] "RemoveContainer" containerID="fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 
19:59:52.864841 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqbjl" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.905833 4787 scope.go:117] "RemoveContainer" containerID="97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.923897 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqbjl"] Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.947190 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqbjl"] Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.966587 4787 scope.go:117] "RemoveContainer" containerID="74ba323e29fa1cf7408fdad23bec88751550a8d5d5db8b6ca3312b26f32eeb16" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.991788 4787 scope.go:117] "RemoveContainer" containerID="fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95" Jan 26 19:59:52 crc kubenswrapper[4787]: E0126 19:59:52.992884 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95\": container with ID starting with fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95 not found: ID does not exist" containerID="fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.992962 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95"} err="failed to get container status \"fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95\": rpc error: code = NotFound desc = could not find container \"fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95\": container with ID starting with 
fe80e2ddff2d87b1be42df11efa5294595492f6a1e7a6292b8e8737da9be1e95 not found: ID does not exist" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.992998 4787 scope.go:117] "RemoveContainer" containerID="97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a" Jan 26 19:59:52 crc kubenswrapper[4787]: E0126 19:59:52.993470 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a\": container with ID starting with 97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a not found: ID does not exist" containerID="97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.993533 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a"} err="failed to get container status \"97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a\": rpc error: code = NotFound desc = could not find container \"97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a\": container with ID starting with 97b73f59464d299da87b384e9591a9b216bf7c1b1dc53d881fbf0c21e1f8e32a not found: ID does not exist" Jan 26 19:59:52 crc kubenswrapper[4787]: I0126 19:59:52.993562 4787 scope.go:117] "RemoveContainer" containerID="74ba323e29fa1cf7408fdad23bec88751550a8d5d5db8b6ca3312b26f32eeb16" Jan 26 19:59:52 crc kubenswrapper[4787]: E0126 19:59:52.993896 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74ba323e29fa1cf7408fdad23bec88751550a8d5d5db8b6ca3312b26f32eeb16\": container with ID starting with 74ba323e29fa1cf7408fdad23bec88751550a8d5d5db8b6ca3312b26f32eeb16 not found: ID does not exist" containerID="74ba323e29fa1cf7408fdad23bec88751550a8d5d5db8b6ca3312b26f32eeb16" Jan 26 19:59:52 crc 
kubenswrapper[4787]: I0126 19:59:52.993927 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74ba323e29fa1cf7408fdad23bec88751550a8d5d5db8b6ca3312b26f32eeb16"} err="failed to get container status \"74ba323e29fa1cf7408fdad23bec88751550a8d5d5db8b6ca3312b26f32eeb16\": rpc error: code = NotFound desc = could not find container \"74ba323e29fa1cf7408fdad23bec88751550a8d5d5db8b6ca3312b26f32eeb16\": container with ID starting with 74ba323e29fa1cf7408fdad23bec88751550a8d5d5db8b6ca3312b26f32eeb16 not found: ID does not exist" Jan 26 19:59:53 crc kubenswrapper[4787]: I0126 19:59:53.609700 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e03b479a-6e3c-41f1-98a0-bd22afce1511" path="/var/lib/kubelet/pods/e03b479a-6e3c-41f1-98a0-bd22afce1511/volumes" Jan 26 19:59:59 crc kubenswrapper[4787]: I0126 19:59:59.591015 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 19:59:59 crc kubenswrapper[4787]: E0126 19:59:59.592002 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.169825 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9"] Jan 26 20:00:00 crc kubenswrapper[4787]: E0126 20:00:00.170460 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03b479a-6e3c-41f1-98a0-bd22afce1511" containerName="extract-content" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.170486 4787 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e03b479a-6e3c-41f1-98a0-bd22afce1511" containerName="extract-content" Jan 26 20:00:00 crc kubenswrapper[4787]: E0126 20:00:00.170536 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03b479a-6e3c-41f1-98a0-bd22afce1511" containerName="extract-utilities" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.170548 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03b479a-6e3c-41f1-98a0-bd22afce1511" containerName="extract-utilities" Jan 26 20:00:00 crc kubenswrapper[4787]: E0126 20:00:00.170570 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03b479a-6e3c-41f1-98a0-bd22afce1511" containerName="registry-server" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.170582 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03b479a-6e3c-41f1-98a0-bd22afce1511" containerName="registry-server" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.170902 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03b479a-6e3c-41f1-98a0-bd22afce1511" containerName="registry-server" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.172425 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.176056 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.176290 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.183087 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9"] Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.342781 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnwwx\" (UniqueName: \"kubernetes.io/projected/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-kube-api-access-qnwwx\") pod \"collect-profiles-29490960-hcqc9\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.343227 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-secret-volume\") pod \"collect-profiles-29490960-hcqc9\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.343308 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-config-volume\") pod \"collect-profiles-29490960-hcqc9\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.445577 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-secret-volume\") pod \"collect-profiles-29490960-hcqc9\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.446033 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-config-volume\") pod \"collect-profiles-29490960-hcqc9\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.446208 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnwwx\" (UniqueName: \"kubernetes.io/projected/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-kube-api-access-qnwwx\") pod \"collect-profiles-29490960-hcqc9\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.446821 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-config-volume\") pod \"collect-profiles-29490960-hcqc9\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.452303 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-secret-volume\") pod \"collect-profiles-29490960-hcqc9\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.465706 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnwwx\" (UniqueName: \"kubernetes.io/projected/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-kube-api-access-qnwwx\") pod \"collect-profiles-29490960-hcqc9\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:00 crc kubenswrapper[4787]: I0126 20:00:00.504959 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:01 crc kubenswrapper[4787]: I0126 20:00:01.011845 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9"] Jan 26 20:00:01 crc kubenswrapper[4787]: E0126 20:00:01.743076 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc9e8213_5323_448c_aba5_5d61ba2a2c6b.slice/crio-conmon-8fcba340cb787efa5a6d2e613fead80d7f0c993c9bc8601be604ca7e51ed94b1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc9e8213_5323_448c_aba5_5d61ba2a2c6b.slice/crio-8fcba340cb787efa5a6d2e613fead80d7f0c993c9bc8601be604ca7e51ed94b1.scope\": RecentStats: unable to find data in memory cache]" Jan 26 20:00:01 crc kubenswrapper[4787]: I0126 20:00:01.973383 4787 generic.go:334] "Generic (PLEG): container finished" podID="bc9e8213-5323-448c-aba5-5d61ba2a2c6b" containerID="8fcba340cb787efa5a6d2e613fead80d7f0c993c9bc8601be604ca7e51ed94b1" exitCode=0 Jan 26 
20:00:01 crc kubenswrapper[4787]: I0126 20:00:01.973495 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" event={"ID":"bc9e8213-5323-448c-aba5-5d61ba2a2c6b","Type":"ContainerDied","Data":"8fcba340cb787efa5a6d2e613fead80d7f0c993c9bc8601be604ca7e51ed94b1"} Jan 26 20:00:01 crc kubenswrapper[4787]: I0126 20:00:01.973791 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" event={"ID":"bc9e8213-5323-448c-aba5-5d61ba2a2c6b","Type":"ContainerStarted","Data":"28ce3e3752eec0cb8cd546c81b19a7ec499ffacb61e6af88ab073bb0be4484aa"} Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.464203 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.626009 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-config-volume\") pod \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.626250 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnwwx\" (UniqueName: \"kubernetes.io/projected/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-kube-api-access-qnwwx\") pod \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.626342 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-secret-volume\") pod \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\" (UID: \"bc9e8213-5323-448c-aba5-5d61ba2a2c6b\") " Jan 26 20:00:03 crc 
kubenswrapper[4787]: I0126 20:00:03.626557 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-config-volume" (OuterVolumeSpecName: "config-volume") pod "bc9e8213-5323-448c-aba5-5d61ba2a2c6b" (UID: "bc9e8213-5323-448c-aba5-5d61ba2a2c6b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.627726 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.633145 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bc9e8213-5323-448c-aba5-5d61ba2a2c6b" (UID: "bc9e8213-5323-448c-aba5-5d61ba2a2c6b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.633564 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-kube-api-access-qnwwx" (OuterVolumeSpecName: "kube-api-access-qnwwx") pod "bc9e8213-5323-448c-aba5-5d61ba2a2c6b" (UID: "bc9e8213-5323-448c-aba5-5d61ba2a2c6b"). InnerVolumeSpecName "kube-api-access-qnwwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.730026 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.730351 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnwwx\" (UniqueName: \"kubernetes.io/projected/bc9e8213-5323-448c-aba5-5d61ba2a2c6b-kube-api-access-qnwwx\") on node \"crc\" DevicePath \"\"" Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.998475 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" event={"ID":"bc9e8213-5323-448c-aba5-5d61ba2a2c6b","Type":"ContainerDied","Data":"28ce3e3752eec0cb8cd546c81b19a7ec499ffacb61e6af88ab073bb0be4484aa"} Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.998514 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28ce3e3752eec0cb8cd546c81b19a7ec499ffacb61e6af88ab073bb0be4484aa" Jan 26 20:00:03 crc kubenswrapper[4787]: I0126 20:00:03.998601 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-hcqc9" Jan 26 20:00:04 crc kubenswrapper[4787]: I0126 20:00:04.556362 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9"] Jan 26 20:00:04 crc kubenswrapper[4787]: I0126 20:00:04.565110 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-sdft9"] Jan 26 20:00:05 crc kubenswrapper[4787]: I0126 20:00:05.610852 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="343a5b16-6144-4761-8e9c-b4afbeb3dc2d" path="/var/lib/kubelet/pods/343a5b16-6144-4761-8e9c-b4afbeb3dc2d/volumes" Jan 26 20:00:11 crc kubenswrapper[4787]: I0126 20:00:11.630791 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:00:11 crc kubenswrapper[4787]: E0126 20:00:11.634537 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:00:22 crc kubenswrapper[4787]: I0126 20:00:22.589490 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:00:22 crc kubenswrapper[4787]: E0126 20:00:22.590618 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:00:34 crc kubenswrapper[4787]: I0126 20:00:34.589940 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:00:34 crc kubenswrapper[4787]: E0126 20:00:34.591275 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:00:47 crc kubenswrapper[4787]: I0126 20:00:47.589909 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:00:47 crc kubenswrapper[4787]: E0126 20:00:47.591597 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.165045 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490961-2c4cr"] Jan 26 20:01:00 crc kubenswrapper[4787]: E0126 20:01:00.169549 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9e8213-5323-448c-aba5-5d61ba2a2c6b" containerName="collect-profiles" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.169598 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9e8213-5323-448c-aba5-5d61ba2a2c6b" 
containerName="collect-profiles" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.169891 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9e8213-5323-448c-aba5-5d61ba2a2c6b" containerName="collect-profiles" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.170778 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.188777 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490961-2c4cr"] Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.358393 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-config-data\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.358450 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8df\" (UniqueName: \"kubernetes.io/projected/3245f462-9672-43fa-9589-1eaf84a33fa7-kube-api-access-pr8df\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.358482 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-fernet-keys\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.358734 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-combined-ca-bundle\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.461082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-config-data\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.461172 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr8df\" (UniqueName: \"kubernetes.io/projected/3245f462-9672-43fa-9589-1eaf84a33fa7-kube-api-access-pr8df\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.461221 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-fernet-keys\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.461326 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-combined-ca-bundle\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.469354 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-combined-ca-bundle\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.470104 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-fernet-keys\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.470379 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-config-data\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.484553 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr8df\" (UniqueName: \"kubernetes.io/projected/3245f462-9672-43fa-9589-1eaf84a33fa7-kube-api-access-pr8df\") pod \"keystone-cron-29490961-2c4cr\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:00 crc kubenswrapper[4787]: I0126 20:01:00.494477 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:01 crc kubenswrapper[4787]: I0126 20:01:01.068758 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490961-2c4cr"] Jan 26 20:01:01 crc kubenswrapper[4787]: I0126 20:01:01.695678 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-2c4cr" event={"ID":"3245f462-9672-43fa-9589-1eaf84a33fa7","Type":"ContainerStarted","Data":"3462a2c0f0d886d31859c4405f832cf515951dc5d259ccce0a30ab677eb981ae"} Jan 26 20:01:01 crc kubenswrapper[4787]: I0126 20:01:01.696305 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-2c4cr" event={"ID":"3245f462-9672-43fa-9589-1eaf84a33fa7","Type":"ContainerStarted","Data":"58e1cfb91e5de125167427ca1c026c9ca475f7a530948f73a738d1d99c04650d"} Jan 26 20:01:02 crc kubenswrapper[4787]: I0126 20:01:02.589724 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:01:02 crc kubenswrapper[4787]: E0126 20:01:02.590065 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:01:04 crc kubenswrapper[4787]: I0126 20:01:04.534499 4787 scope.go:117] "RemoveContainer" containerID="4445532f91802a828f0c8250991b94cef544289b0bd7af1f0422407877daa91e" Jan 26 20:01:04 crc kubenswrapper[4787]: I0126 20:01:04.727650 4787 generic.go:334] "Generic (PLEG): container finished" podID="3245f462-9672-43fa-9589-1eaf84a33fa7" containerID="3462a2c0f0d886d31859c4405f832cf515951dc5d259ccce0a30ab677eb981ae" exitCode=0 Jan 26 20:01:04 crc 
kubenswrapper[4787]: I0126 20:01:04.727718 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-2c4cr" event={"ID":"3245f462-9672-43fa-9589-1eaf84a33fa7","Type":"ContainerDied","Data":"3462a2c0f0d886d31859c4405f832cf515951dc5d259ccce0a30ab677eb981ae"} Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.219554 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.300932 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-combined-ca-bundle\") pod \"3245f462-9672-43fa-9589-1eaf84a33fa7\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.301074 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-config-data\") pod \"3245f462-9672-43fa-9589-1eaf84a33fa7\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.301220 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr8df\" (UniqueName: \"kubernetes.io/projected/3245f462-9672-43fa-9589-1eaf84a33fa7-kube-api-access-pr8df\") pod \"3245f462-9672-43fa-9589-1eaf84a33fa7\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.301308 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-fernet-keys\") pod \"3245f462-9672-43fa-9589-1eaf84a33fa7\" (UID: \"3245f462-9672-43fa-9589-1eaf84a33fa7\") " Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.308218 4787 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3245f462-9672-43fa-9589-1eaf84a33fa7-kube-api-access-pr8df" (OuterVolumeSpecName: "kube-api-access-pr8df") pod "3245f462-9672-43fa-9589-1eaf84a33fa7" (UID: "3245f462-9672-43fa-9589-1eaf84a33fa7"). InnerVolumeSpecName "kube-api-access-pr8df". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.311155 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3245f462-9672-43fa-9589-1eaf84a33fa7" (UID: "3245f462-9672-43fa-9589-1eaf84a33fa7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.346209 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3245f462-9672-43fa-9589-1eaf84a33fa7" (UID: "3245f462-9672-43fa-9589-1eaf84a33fa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.403167 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-config-data" (OuterVolumeSpecName: "config-data") pod "3245f462-9672-43fa-9589-1eaf84a33fa7" (UID: "3245f462-9672-43fa-9589-1eaf84a33fa7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.404015 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr8df\" (UniqueName: \"kubernetes.io/projected/3245f462-9672-43fa-9589-1eaf84a33fa7-kube-api-access-pr8df\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.404232 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.404298 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.404346 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3245f462-9672-43fa-9589-1eaf84a33fa7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.760233 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-2c4cr" event={"ID":"3245f462-9672-43fa-9589-1eaf84a33fa7","Type":"ContainerDied","Data":"58e1cfb91e5de125167427ca1c026c9ca475f7a530948f73a738d1d99c04650d"} Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.760773 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58e1cfb91e5de125167427ca1c026c9ca475f7a530948f73a738d1d99c04650d" Jan 26 20:01:06 crc kubenswrapper[4787]: I0126 20:01:06.760891 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490961-2c4cr" Jan 26 20:01:16 crc kubenswrapper[4787]: I0126 20:01:16.591240 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:01:16 crc kubenswrapper[4787]: E0126 20:01:16.592158 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:01:28 crc kubenswrapper[4787]: I0126 20:01:28.589993 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:01:28 crc kubenswrapper[4787]: E0126 20:01:28.590822 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:01:41 crc kubenswrapper[4787]: I0126 20:01:41.598906 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:01:41 crc kubenswrapper[4787]: E0126 20:01:41.600072 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:01:56 crc kubenswrapper[4787]: I0126 20:01:56.590158 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:01:56 crc kubenswrapper[4787]: E0126 20:01:56.590905 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:02:09 crc kubenswrapper[4787]: I0126 20:02:09.591341 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:02:09 crc kubenswrapper[4787]: E0126 20:02:09.592169 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:02:22 crc kubenswrapper[4787]: I0126 20:02:22.590145 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:02:22 crc kubenswrapper[4787]: E0126 20:02:22.591385 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:02:36 crc kubenswrapper[4787]: I0126 20:02:36.589601 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:02:36 crc kubenswrapper[4787]: E0126 20:02:36.590312 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:02:45 crc kubenswrapper[4787]: I0126 20:02:45.994837 4787 generic.go:334] "Generic (PLEG): container finished" podID="529c5af2-a36e-4826-88a7-0deed8d59e7c" containerID="9bde2d65fe9213248d82720f41b6e48c364129905e8a269b88895413d1f96e2f" exitCode=0 Jan 26 20:02:45 crc kubenswrapper[4787]: I0126 20:02:45.994936 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wbcsn" event={"ID":"529c5af2-a36e-4826-88a7-0deed8d59e7c","Type":"ContainerDied","Data":"9bde2d65fe9213248d82720f41b6e48c364129905e8a269b88895413d1f96e2f"} Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.543635 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wbcsn" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.603578 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:02:47 crc kubenswrapper[4787]: E0126 20:02:47.604074 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.657270 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl2g4\" (UniqueName: \"kubernetes.io/projected/529c5af2-a36e-4826-88a7-0deed8d59e7c-kube-api-access-zl2g4\") pod \"529c5af2-a36e-4826-88a7-0deed8d59e7c\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.657397 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceph\") pod \"529c5af2-a36e-4826-88a7-0deed8d59e7c\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.657599 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-1\") pod \"529c5af2-a36e-4826-88a7-0deed8d59e7c\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.657650 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-0\") pod \"529c5af2-a36e-4826-88a7-0deed8d59e7c\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.657680 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ssh-key-openstack-cell1\") pod \"529c5af2-a36e-4826-88a7-0deed8d59e7c\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.657715 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-inventory\") pod \"529c5af2-a36e-4826-88a7-0deed8d59e7c\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.657838 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-telemetry-combined-ca-bundle\") pod \"529c5af2-a36e-4826-88a7-0deed8d59e7c\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.657991 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-2\") pod \"529c5af2-a36e-4826-88a7-0deed8d59e7c\" (UID: \"529c5af2-a36e-4826-88a7-0deed8d59e7c\") " Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.667748 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceph" (OuterVolumeSpecName: "ceph") pod "529c5af2-a36e-4826-88a7-0deed8d59e7c" 
(UID: "529c5af2-a36e-4826-88a7-0deed8d59e7c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.668308 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "529c5af2-a36e-4826-88a7-0deed8d59e7c" (UID: "529c5af2-a36e-4826-88a7-0deed8d59e7c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.677877 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529c5af2-a36e-4826-88a7-0deed8d59e7c-kube-api-access-zl2g4" (OuterVolumeSpecName: "kube-api-access-zl2g4") pod "529c5af2-a36e-4826-88a7-0deed8d59e7c" (UID: "529c5af2-a36e-4826-88a7-0deed8d59e7c"). InnerVolumeSpecName "kube-api-access-zl2g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.689896 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "529c5af2-a36e-4826-88a7-0deed8d59e7c" (UID: "529c5af2-a36e-4826-88a7-0deed8d59e7c"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.691583 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "529c5af2-a36e-4826-88a7-0deed8d59e7c" (UID: "529c5af2-a36e-4826-88a7-0deed8d59e7c"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.695562 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "529c5af2-a36e-4826-88a7-0deed8d59e7c" (UID: "529c5af2-a36e-4826-88a7-0deed8d59e7c"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.697450 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "529c5af2-a36e-4826-88a7-0deed8d59e7c" (UID: "529c5af2-a36e-4826-88a7-0deed8d59e7c"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.698408 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-inventory" (OuterVolumeSpecName: "inventory") pod "529c5af2-a36e-4826-88a7-0deed8d59e7c" (UID: "529c5af2-a36e-4826-88a7-0deed8d59e7c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.762913 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.763149 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.763227 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.763297 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.763363 4787 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.763460 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.763549 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl2g4\" (UniqueName: 
\"kubernetes.io/projected/529c5af2-a36e-4826-88a7-0deed8d59e7c-kube-api-access-zl2g4\") on node \"crc\" DevicePath \"\"" Jan 26 20:02:47 crc kubenswrapper[4787]: I0126 20:02:47.763645 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/529c5af2-a36e-4826-88a7-0deed8d59e7c-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.024842 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wbcsn" event={"ID":"529c5af2-a36e-4826-88a7-0deed8d59e7c","Type":"ContainerDied","Data":"a0e386a3cbd3722313b076ac6c874495201dfe85919cfcc5c1fe7a925ce79253"} Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.024915 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e386a3cbd3722313b076ac6c874495201dfe85919cfcc5c1fe7a925ce79253" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.025442 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wbcsn" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.138837 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-v7w5x"] Jan 26 20:02:48 crc kubenswrapper[4787]: E0126 20:02:48.139336 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3245f462-9672-43fa-9589-1eaf84a33fa7" containerName="keystone-cron" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.139363 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3245f462-9672-43fa-9589-1eaf84a33fa7" containerName="keystone-cron" Jan 26 20:02:48 crc kubenswrapper[4787]: E0126 20:02:48.139407 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529c5af2-a36e-4826-88a7-0deed8d59e7c" containerName="telemetry-openstack-openstack-cell1" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.139417 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="529c5af2-a36e-4826-88a7-0deed8d59e7c" containerName="telemetry-openstack-openstack-cell1" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.139816 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3245f462-9672-43fa-9589-1eaf84a33fa7" containerName="keystone-cron" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.139841 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="529c5af2-a36e-4826-88a7-0deed8d59e7c" containerName="telemetry-openstack-openstack-cell1" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.140581 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.148185 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.148193 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.148511 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.148680 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.149816 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.157314 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-v7w5x"] Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.273715 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.274006 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " 
pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.274198 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.274223 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8rrn\" (UniqueName: \"kubernetes.io/projected/435fa74c-76cb-40b5-b78d-479aeed03ddd-kube-api-access-n8rrn\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.274263 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.274403 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.377642 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.377738 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.378035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.378072 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8rrn\" (UniqueName: \"kubernetes.io/projected/435fa74c-76cb-40b5-b78d-479aeed03ddd-kube-api-access-n8rrn\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.378138 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: 
\"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.378212 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.390076 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.396535 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.401533 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.407497 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8rrn\" (UniqueName: 
\"kubernetes.io/projected/435fa74c-76cb-40b5-b78d-479aeed03ddd-kube-api-access-n8rrn\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.412596 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.413721 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-v7w5x\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:48 crc kubenswrapper[4787]: I0126 20:02:48.467880 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:02:49 crc kubenswrapper[4787]: I0126 20:02:49.142422 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-v7w5x"] Jan 26 20:02:49 crc kubenswrapper[4787]: I0126 20:02:49.144173 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:02:50 crc kubenswrapper[4787]: I0126 20:02:50.048247 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" event={"ID":"435fa74c-76cb-40b5-b78d-479aeed03ddd","Type":"ContainerStarted","Data":"d1f3cc60ecd4c714f55453a16f895f0b11dcea80e2731cd3b10cdb5658be3036"} Jan 26 20:02:50 crc kubenswrapper[4787]: I0126 20:02:50.048853 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" event={"ID":"435fa74c-76cb-40b5-b78d-479aeed03ddd","Type":"ContainerStarted","Data":"d6c618d688aeccb1c6b21849b83dd1ec6772803b3a54e18789b384e46266b05e"} Jan 26 20:02:50 crc kubenswrapper[4787]: I0126 20:02:50.073126 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" podStartSLOduration=1.576657688 podStartE2EDuration="2.073101328s" podCreationTimestamp="2026-01-26 20:02:48 +0000 UTC" firstStartedPulling="2026-01-26 20:02:49.143980403 +0000 UTC m=+8337.851116536" lastFinishedPulling="2026-01-26 20:02:49.640424013 +0000 UTC m=+8338.347560176" observedRunningTime="2026-01-26 20:02:50.066794834 +0000 UTC m=+8338.773930957" watchObservedRunningTime="2026-01-26 20:02:50.073101328 +0000 UTC m=+8338.780237471" Jan 26 20:03:02 crc kubenswrapper[4787]: I0126 20:03:02.589860 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:03:02 crc kubenswrapper[4787]: E0126 20:03:02.590905 4787 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:03:13 crc kubenswrapper[4787]: I0126 20:03:13.589796 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:03:13 crc kubenswrapper[4787]: E0126 20:03:13.590942 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:03:25 crc kubenswrapper[4787]: I0126 20:03:25.590182 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:03:25 crc kubenswrapper[4787]: E0126 20:03:25.591257 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:03:37 crc kubenswrapper[4787]: I0126 20:03:37.590329 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:03:37 crc kubenswrapper[4787]: E0126 
20:03:37.591982 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:03:51 crc kubenswrapper[4787]: I0126 20:03:51.618983 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:03:51 crc kubenswrapper[4787]: E0126 20:03:51.620204 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:03:56 crc kubenswrapper[4787]: I0126 20:03:56.881300 4787 generic.go:334] "Generic (PLEG): container finished" podID="435fa74c-76cb-40b5-b78d-479aeed03ddd" containerID="d1f3cc60ecd4c714f55453a16f895f0b11dcea80e2731cd3b10cdb5658be3036" exitCode=0 Jan 26 20:03:56 crc kubenswrapper[4787]: I0126 20:03:56.881380 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" event={"ID":"435fa74c-76cb-40b5-b78d-479aeed03ddd","Type":"ContainerDied","Data":"d1f3cc60ecd4c714f55453a16f895f0b11dcea80e2731cd3b10cdb5658be3036"} Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.384188 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.511709 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ceph\") pod \"435fa74c-76cb-40b5-b78d-479aeed03ddd\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.511786 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-inventory\") pod \"435fa74c-76cb-40b5-b78d-479aeed03ddd\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.511857 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-agent-neutron-config-0\") pod \"435fa74c-76cb-40b5-b78d-479aeed03ddd\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.511937 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ssh-key-openstack-cell1\") pod \"435fa74c-76cb-40b5-b78d-479aeed03ddd\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.512095 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8rrn\" (UniqueName: \"kubernetes.io/projected/435fa74c-76cb-40b5-b78d-479aeed03ddd-kube-api-access-n8rrn\") pod \"435fa74c-76cb-40b5-b78d-479aeed03ddd\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.512308 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-combined-ca-bundle\") pod \"435fa74c-76cb-40b5-b78d-479aeed03ddd\" (UID: \"435fa74c-76cb-40b5-b78d-479aeed03ddd\") " Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.518317 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "435fa74c-76cb-40b5-b78d-479aeed03ddd" (UID: "435fa74c-76cb-40b5-b78d-479aeed03ddd"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.519187 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435fa74c-76cb-40b5-b78d-479aeed03ddd-kube-api-access-n8rrn" (OuterVolumeSpecName: "kube-api-access-n8rrn") pod "435fa74c-76cb-40b5-b78d-479aeed03ddd" (UID: "435fa74c-76cb-40b5-b78d-479aeed03ddd"). InnerVolumeSpecName "kube-api-access-n8rrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.532474 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ceph" (OuterVolumeSpecName: "ceph") pod "435fa74c-76cb-40b5-b78d-479aeed03ddd" (UID: "435fa74c-76cb-40b5-b78d-479aeed03ddd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.548719 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-inventory" (OuterVolumeSpecName: "inventory") pod "435fa74c-76cb-40b5-b78d-479aeed03ddd" (UID: "435fa74c-76cb-40b5-b78d-479aeed03ddd"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.555793 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "435fa74c-76cb-40b5-b78d-479aeed03ddd" (UID: "435fa74c-76cb-40b5-b78d-479aeed03ddd"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.558463 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "435fa74c-76cb-40b5-b78d-479aeed03ddd" (UID: "435fa74c-76cb-40b5-b78d-479aeed03ddd"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.615524 4787 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.615562 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.615575 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.615584 4787 reconciler_common.go:293] "Volume detached for volume 
\"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.615593 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/435fa74c-76cb-40b5-b78d-479aeed03ddd-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.615603 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8rrn\" (UniqueName: \"kubernetes.io/projected/435fa74c-76cb-40b5-b78d-479aeed03ddd-kube-api-access-n8rrn\") on node \"crc\" DevicePath \"\"" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.906473 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" event={"ID":"435fa74c-76cb-40b5-b78d-479aeed03ddd","Type":"ContainerDied","Data":"d6c618d688aeccb1c6b21849b83dd1ec6772803b3a54e18789b384e46266b05e"} Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.906531 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6c618d688aeccb1c6b21849b83dd1ec6772803b3a54e18789b384e46266b05e" Jan 26 20:03:58 crc kubenswrapper[4787]: I0126 20:03:58.906552 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-v7w5x" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.060996 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm"] Jan 26 20:03:59 crc kubenswrapper[4787]: E0126 20:03:59.062488 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435fa74c-76cb-40b5-b78d-479aeed03ddd" containerName="neutron-sriov-openstack-openstack-cell1" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.062519 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="435fa74c-76cb-40b5-b78d-479aeed03ddd" containerName="neutron-sriov-openstack-openstack-cell1" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.063163 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="435fa74c-76cb-40b5-b78d-479aeed03ddd" containerName="neutron-sriov-openstack-openstack-cell1" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.064513 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.074636 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.074682 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.074923 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.074925 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.076300 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.097590 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm"] Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.239728 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.239835 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: 
\"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.239895 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.239925 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhwh7\" (UniqueName: \"kubernetes.io/projected/69b66133-0df1-4b99-b902-4197ea7b9bb7-kube-api-access-rhwh7\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.239964 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.240136 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.342455 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.342589 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.342638 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.342700 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.342762 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhwh7\" (UniqueName: \"kubernetes.io/projected/69b66133-0df1-4b99-b902-4197ea7b9bb7-kube-api-access-rhwh7\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: 
\"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.342796 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.352747 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.353221 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.353702 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.354455 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.358721 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.378014 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhwh7\" (UniqueName: \"kubernetes.io/projected/69b66133-0df1-4b99-b902-4197ea7b9bb7-kube-api-access-rhwh7\") pod \"neutron-dhcp-openstack-openstack-cell1-zbwbm\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:03:59 crc kubenswrapper[4787]: I0126 20:03:59.404325 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:04:00 crc kubenswrapper[4787]: I0126 20:04:00.016107 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm"] Jan 26 20:04:00 crc kubenswrapper[4787]: I0126 20:04:00.932675 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" event={"ID":"69b66133-0df1-4b99-b902-4197ea7b9bb7","Type":"ContainerStarted","Data":"86a87aed3d796c799c084112c5fb10c69eb3b27b6c6d2f145a24b6f080741539"} Jan 26 20:04:00 crc kubenswrapper[4787]: I0126 20:04:00.933034 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" event={"ID":"69b66133-0df1-4b99-b902-4197ea7b9bb7","Type":"ContainerStarted","Data":"d5f0f5b4ec7126d8e6d0a24501d50d23fcc0a809ff510eeda27bf474c22076eb"} Jan 26 20:04:00 crc kubenswrapper[4787]: I0126 20:04:00.958143 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" podStartSLOduration=1.476521413 podStartE2EDuration="1.958123578s" podCreationTimestamp="2026-01-26 20:03:59 +0000 UTC" firstStartedPulling="2026-01-26 20:04:00.017893741 +0000 UTC m=+8408.725029874" lastFinishedPulling="2026-01-26 20:04:00.499495906 +0000 UTC m=+8409.206632039" observedRunningTime="2026-01-26 20:04:00.955096934 +0000 UTC m=+8409.662233057" watchObservedRunningTime="2026-01-26 20:04:00.958123578 +0000 UTC m=+8409.665259711" Jan 26 20:04:03 crc kubenswrapper[4787]: I0126 20:04:03.592716 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:04:03 crc kubenswrapper[4787]: E0126 20:04:03.593455 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:04:17 crc kubenswrapper[4787]: I0126 20:04:17.590201 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:04:17 crc kubenswrapper[4787]: E0126 20:04:17.591404 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:04:29 crc kubenswrapper[4787]: I0126 20:04:29.589564 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:04:29 crc kubenswrapper[4787]: E0126 20:04:29.590611 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:04:43 crc kubenswrapper[4787]: I0126 20:04:43.590183 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:04:43 crc kubenswrapper[4787]: E0126 20:04:43.591380 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:04:54 crc kubenswrapper[4787]: I0126 20:04:54.590914 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:04:55 crc kubenswrapper[4787]: I0126 20:04:55.610177 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"1f6043ca920f870f78f18f8fcbd8dba77af8b649dcc965e5893af257769da443"} Jan 26 20:05:20 crc kubenswrapper[4787]: I0126 20:05:20.956253 4787 generic.go:334] "Generic (PLEG): container finished" podID="69b66133-0df1-4b99-b902-4197ea7b9bb7" containerID="86a87aed3d796c799c084112c5fb10c69eb3b27b6c6d2f145a24b6f080741539" exitCode=0 Jan 26 20:05:20 crc kubenswrapper[4787]: I0126 20:05:20.956355 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" event={"ID":"69b66133-0df1-4b99-b902-4197ea7b9bb7","Type":"ContainerDied","Data":"86a87aed3d796c799c084112c5fb10c69eb3b27b6c6d2f145a24b6f080741539"} Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.465396 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.648855 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-inventory\") pod \"69b66133-0df1-4b99-b902-4197ea7b9bb7\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.649109 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhwh7\" (UniqueName: \"kubernetes.io/projected/69b66133-0df1-4b99-b902-4197ea7b9bb7-kube-api-access-rhwh7\") pod \"69b66133-0df1-4b99-b902-4197ea7b9bb7\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.649366 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-agent-neutron-config-0\") pod \"69b66133-0df1-4b99-b902-4197ea7b9bb7\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.649452 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ssh-key-openstack-cell1\") pod \"69b66133-0df1-4b99-b902-4197ea7b9bb7\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.649536 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-combined-ca-bundle\") pod \"69b66133-0df1-4b99-b902-4197ea7b9bb7\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 
20:05:22.649629 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ceph\") pod \"69b66133-0df1-4b99-b902-4197ea7b9bb7\" (UID: \"69b66133-0df1-4b99-b902-4197ea7b9bb7\") " Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.655061 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ceph" (OuterVolumeSpecName: "ceph") pod "69b66133-0df1-4b99-b902-4197ea7b9bb7" (UID: "69b66133-0df1-4b99-b902-4197ea7b9bb7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.655238 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "69b66133-0df1-4b99-b902-4197ea7b9bb7" (UID: "69b66133-0df1-4b99-b902-4197ea7b9bb7"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.656548 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b66133-0df1-4b99-b902-4197ea7b9bb7-kube-api-access-rhwh7" (OuterVolumeSpecName: "kube-api-access-rhwh7") pod "69b66133-0df1-4b99-b902-4197ea7b9bb7" (UID: "69b66133-0df1-4b99-b902-4197ea7b9bb7"). InnerVolumeSpecName "kube-api-access-rhwh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.691881 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "69b66133-0df1-4b99-b902-4197ea7b9bb7" (UID: "69b66133-0df1-4b99-b902-4197ea7b9bb7"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.697971 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "69b66133-0df1-4b99-b902-4197ea7b9bb7" (UID: "69b66133-0df1-4b99-b902-4197ea7b9bb7"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.713806 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-inventory" (OuterVolumeSpecName: "inventory") pod "69b66133-0df1-4b99-b902-4197ea7b9bb7" (UID: "69b66133-0df1-4b99-b902-4197ea7b9bb7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.752149 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.752183 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhwh7\" (UniqueName: \"kubernetes.io/projected/69b66133-0df1-4b99-b902-4197ea7b9bb7-kube-api-access-rhwh7\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.752196 4787 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.752204 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.752213 4787 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.752221 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/69b66133-0df1-4b99-b902-4197ea7b9bb7-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.990545 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" 
event={"ID":"69b66133-0df1-4b99-b902-4197ea7b9bb7","Type":"ContainerDied","Data":"d5f0f5b4ec7126d8e6d0a24501d50d23fcc0a809ff510eeda27bf474c22076eb"} Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.990612 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-zbwbm" Jan 26 20:05:22 crc kubenswrapper[4787]: I0126 20:05:22.990626 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5f0f5b4ec7126d8e6d0a24501d50d23fcc0a809ff510eeda27bf474c22076eb" Jan 26 20:05:50 crc kubenswrapper[4787]: I0126 20:05:50.651596 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 20:05:50 crc kubenswrapper[4787]: I0126 20:05:50.652371 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="e2031eb5-98c0-4b04-baa1-3e7392198341" containerName="nova-cell0-conductor-conductor" containerID="cri-o://f4e5df9b9ee303c518fa932f79eceb8e3e30a2d19801376fb2aa4e2f1e1cc007" gracePeriod=30 Jan 26 20:05:50 crc kubenswrapper[4787]: I0126 20:05:50.671305 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 20:05:50 crc kubenswrapper[4787]: I0126 20:05:50.671539 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1" containerName="nova-cell1-conductor-conductor" containerID="cri-o://54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d" gracePeriod=30 Jan 26 20:05:51 crc kubenswrapper[4787]: I0126 20:05:51.518087 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 20:05:51 crc kubenswrapper[4787]: I0126 20:05:51.518725 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6ba7dad7-8960-487f-8626-a73b43620632" 
containerName="nova-api-log" containerID="cri-o://28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19" gracePeriod=30 Jan 26 20:05:51 crc kubenswrapper[4787]: I0126 20:05:51.518848 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6ba7dad7-8960-487f-8626-a73b43620632" containerName="nova-api-api" containerID="cri-o://f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3" gracePeriod=30 Jan 26 20:05:51 crc kubenswrapper[4787]: I0126 20:05:51.558944 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 20:05:51 crc kubenswrapper[4787]: I0126 20:05:51.559228 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d70355a6-3daa-42e6-8b3d-7bedf5d20cfe" containerName="nova-scheduler-scheduler" containerID="cri-o://829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65" gracePeriod=30 Jan 26 20:05:51 crc kubenswrapper[4787]: I0126 20:05:51.572593 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 20:05:51 crc kubenswrapper[4787]: I0126 20:05:51.573268 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-metadata" containerID="cri-o://aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7" gracePeriod=30 Jan 26 20:05:51 crc kubenswrapper[4787]: I0126 20:05:51.572927 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-log" containerID="cri-o://7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca" gracePeriod=30 Jan 26 20:05:52 crc kubenswrapper[4787]: I0126 20:05:52.342566 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="6ba7dad7-8960-487f-8626-a73b43620632" containerID="28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19" exitCode=143 Jan 26 20:05:52 crc kubenswrapper[4787]: I0126 20:05:52.342641 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ba7dad7-8960-487f-8626-a73b43620632","Type":"ContainerDied","Data":"28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19"} Jan 26 20:05:52 crc kubenswrapper[4787]: I0126 20:05:52.345022 4787 generic.go:334] "Generic (PLEG): container finished" podID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerID="7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca" exitCode=143 Jan 26 20:05:52 crc kubenswrapper[4787]: I0126 20:05:52.345049 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4","Type":"ContainerDied","Data":"7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca"} Jan 26 20:05:52 crc kubenswrapper[4787]: E0126 20:05:52.620099 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4e5df9b9ee303c518fa932f79eceb8e3e30a2d19801376fb2aa4e2f1e1cc007" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 20:05:52 crc kubenswrapper[4787]: E0126 20:05:52.621931 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4e5df9b9ee303c518fa932f79eceb8e3e30a2d19801376fb2aa4e2f1e1cc007" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 20:05:52 crc kubenswrapper[4787]: E0126 20:05:52.623399 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="f4e5df9b9ee303c518fa932f79eceb8e3e30a2d19801376fb2aa4e2f1e1cc007" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 26 20:05:52 crc kubenswrapper[4787]: E0126 20:05:52.623446 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e2031eb5-98c0-4b04-baa1-3e7392198341" containerName="nova-cell0-conductor-conductor" Jan 26 20:05:52 crc kubenswrapper[4787]: I0126 20:05:52.982311 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.098205 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-config-data\") pod \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.098670 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-combined-ca-bundle\") pod \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.098872 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62kpr\" (UniqueName: \"kubernetes.io/projected/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-kube-api-access-62kpr\") pod \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\" (UID: \"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1\") " Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.110976 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-kube-api-access-62kpr" (OuterVolumeSpecName: "kube-api-access-62kpr") pod "1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1" (UID: "1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1"). InnerVolumeSpecName "kube-api-access-62kpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.131677 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-config-data" (OuterVolumeSpecName: "config-data") pod "1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1" (UID: "1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.143198 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1" (UID: "1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.201337 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.201365 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.201376 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62kpr\" (UniqueName: \"kubernetes.io/projected/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1-kube-api-access-62kpr\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.355726 4787 generic.go:334] "Generic (PLEG): container finished" podID="1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1" containerID="54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d" exitCode=0 Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.355777 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1","Type":"ContainerDied","Data":"54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d"} Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.355804 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1","Type":"ContainerDied","Data":"fb0a4ba19848ec78cba06451d5ab5bc326417ebccc9cb2c696fa5ee4b7d6138b"} Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.355822 4787 scope.go:117] "RemoveContainer" containerID="54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.355931 4787 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.395350 4787 scope.go:117] "RemoveContainer" containerID="54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d" Jan 26 20:05:53 crc kubenswrapper[4787]: E0126 20:05:53.395908 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d\": container with ID starting with 54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d not found: ID does not exist" containerID="54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.395977 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d"} err="failed to get container status \"54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d\": rpc error: code = NotFound desc = could not find container \"54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d\": container with ID starting with 54986671ffc70ad291d3431bc8422c5c53ac573bbadb920abd51574d39e50c6d not found: ID does not exist" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.402021 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.424302 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.438056 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 20:05:53 crc kubenswrapper[4787]: E0126 20:05:53.438550 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1" 
containerName="nova-cell1-conductor-conductor" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.438568 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1" containerName="nova-cell1-conductor-conductor" Jan 26 20:05:53 crc kubenswrapper[4787]: E0126 20:05:53.438590 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b66133-0df1-4b99-b902-4197ea7b9bb7" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.438597 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b66133-0df1-4b99-b902-4197ea7b9bb7" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.438787 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1" containerName="nova-cell1-conductor-conductor" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.438803 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b66133-0df1-4b99-b902-4197ea7b9bb7" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.439540 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.444555 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.448790 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.600471 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1" path="/var/lib/kubelet/pods/1f27b8fd-ca02-4e6a-8bd6-601ef4cc09e1/volumes" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.611496 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafda68-5f85-4fbd-920d-372205b93018-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6fafda68-5f85-4fbd-920d-372205b93018\") " pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.611767 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px8wx\" (UniqueName: \"kubernetes.io/projected/6fafda68-5f85-4fbd-920d-372205b93018-kube-api-access-px8wx\") pod \"nova-cell1-conductor-0\" (UID: \"6fafda68-5f85-4fbd-920d-372205b93018\") " pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.611824 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafda68-5f85-4fbd-920d-372205b93018-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6fafda68-5f85-4fbd-920d-372205b93018\") " pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.713682 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-px8wx\" (UniqueName: \"kubernetes.io/projected/6fafda68-5f85-4fbd-920d-372205b93018-kube-api-access-px8wx\") pod \"nova-cell1-conductor-0\" (UID: \"6fafda68-5f85-4fbd-920d-372205b93018\") " pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.713739 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafda68-5f85-4fbd-920d-372205b93018-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6fafda68-5f85-4fbd-920d-372205b93018\") " pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.713807 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafda68-5f85-4fbd-920d-372205b93018-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6fafda68-5f85-4fbd-920d-372205b93018\") " pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.718683 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafda68-5f85-4fbd-920d-372205b93018-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6fafda68-5f85-4fbd-920d-372205b93018\") " pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.720698 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafda68-5f85-4fbd-920d-372205b93018-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6fafda68-5f85-4fbd-920d-372205b93018\") " pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.738491 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px8wx\" (UniqueName: \"kubernetes.io/projected/6fafda68-5f85-4fbd-920d-372205b93018-kube-api-access-px8wx\") pod 
\"nova-cell1-conductor-0\" (UID: \"6fafda68-5f85-4fbd-920d-372205b93018\") " pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:53 crc kubenswrapper[4787]: I0126 20:05:53.756313 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:54 crc kubenswrapper[4787]: I0126 20:05:54.271134 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 20:05:54 crc kubenswrapper[4787]: I0126 20:05:54.377982 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6fafda68-5f85-4fbd-920d-372205b93018","Type":"ContainerStarted","Data":"5a6ab26a2f23a909d45edb8a6eff1718fe7ce91a08fab843c068e4c6c8bb8d33"} Jan 26 20:05:54 crc kubenswrapper[4787]: E0126 20:05:54.671217 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 20:05:54 crc kubenswrapper[4787]: E0126 20:05:54.672751 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 20:05:54 crc kubenswrapper[4787]: E0126 20:05:54.674216 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 20:05:54 crc kubenswrapper[4787]: E0126 20:05:54.674258 
4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d70355a6-3daa-42e6-8b3d-7bedf5d20cfe" containerName="nova-scheduler-scheduler" Jan 26 20:05:54 crc kubenswrapper[4787]: I0126 20:05:54.751134 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.78:8775/\": read tcp 10.217.0.2:53118->10.217.1.78:8775: read: connection reset by peer" Jan 26 20:05:54 crc kubenswrapper[4787]: I0126 20:05:54.751272 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.78:8775/\": read tcp 10.217.0.2:53132->10.217.1.78:8775: read: connection reset by peer" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.333866 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.402229 4787 generic.go:334] "Generic (PLEG): container finished" podID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerID="aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7" exitCode=0 Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.402294 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4","Type":"ContainerDied","Data":"aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7"} Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.402320 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4","Type":"ContainerDied","Data":"b1de05c37e8889d688ca53ac3f0ea906f84931338b38e106588c18cbdd7c0737"} Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.402336 4787 scope.go:117] "RemoveContainer" containerID="aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.402432 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.408157 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.411421 4787 generic.go:334] "Generic (PLEG): container finished" podID="6ba7dad7-8960-487f-8626-a73b43620632" containerID="f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3" exitCode=0 Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.411464 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ba7dad7-8960-487f-8626-a73b43620632","Type":"ContainerDied","Data":"f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3"} Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.411483 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ba7dad7-8960-487f-8626-a73b43620632","Type":"ContainerDied","Data":"8280c0832e8de5feef50b2c22c069d52ca7d23ec60a0dd7fdefb0290b08a9591"} Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.413982 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6fafda68-5f85-4fbd-920d-372205b93018","Type":"ContainerStarted","Data":"4b9e8906e6d26e28f627a6326679644a105e33f363046cc8ec9b0b2b30801f41"} Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.426074 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.452418 4787 scope.go:117] "RemoveContainer" containerID="7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.463413 4787 generic.go:334] "Generic (PLEG): container finished" podID="e2031eb5-98c0-4b04-baa1-3e7392198341" containerID="f4e5df9b9ee303c518fa932f79eceb8e3e30a2d19801376fb2aa4e2f1e1cc007" exitCode=0 Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.465062 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"e2031eb5-98c0-4b04-baa1-3e7392198341","Type":"ContainerDied","Data":"f4e5df9b9ee303c518fa932f79eceb8e3e30a2d19801376fb2aa4e2f1e1cc007"} Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.466005 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-config-data\") pod \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.477343 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-config-data\") pod \"6ba7dad7-8960-487f-8626-a73b43620632\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.477458 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-combined-ca-bundle\") pod \"6ba7dad7-8960-487f-8626-a73b43620632\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.477484 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggwjv\" (UniqueName: \"kubernetes.io/projected/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-kube-api-access-ggwjv\") pod \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.475491 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.4754687779999998 podStartE2EDuration="2.475468778s" podCreationTimestamp="2026-01-26 20:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 20:05:55.472423873 +0000 UTC m=+8524.179560006" watchObservedRunningTime="2026-01-26 20:05:55.475468778 +0000 UTC m=+8524.182604911" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.477562 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-combined-ca-bundle\") pod \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.478589 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba7dad7-8960-487f-8626-a73b43620632-logs\") pod \"6ba7dad7-8960-487f-8626-a73b43620632\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.478679 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-logs\") pod \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\" (UID: \"621bfbac-f962-44a9-b7dc-0ede7bd2aaa4\") " Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.480870 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba7dad7-8960-487f-8626-a73b43620632-logs" (OuterVolumeSpecName: "logs") pod "6ba7dad7-8960-487f-8626-a73b43620632" (UID: "6ba7dad7-8960-487f-8626-a73b43620632"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.481523 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-logs" (OuterVolumeSpecName: "logs") pod "621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" (UID: "621bfbac-f962-44a9-b7dc-0ede7bd2aaa4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.486181 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-kube-api-access-ggwjv" (OuterVolumeSpecName: "kube-api-access-ggwjv") pod "621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" (UID: "621bfbac-f962-44a9-b7dc-0ede7bd2aaa4"). InnerVolumeSpecName "kube-api-access-ggwjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.505454 4787 scope.go:117] "RemoveContainer" containerID="aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7" Jan 26 20:05:55 crc kubenswrapper[4787]: E0126 20:05:55.509488 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7\": container with ID starting with aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7 not found: ID does not exist" containerID="aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.509542 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7"} err="failed to get container status \"aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7\": rpc error: code = NotFound desc = could not find container \"aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7\": container with ID starting with aca7d34f211d9b4ea24047239edc2c0a04a2404aa15ef1f825b5c671df86b3e7 not found: ID does not exist" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.509566 4787 scope.go:117] "RemoveContainer" containerID="7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca" Jan 26 20:05:55 crc kubenswrapper[4787]: E0126 20:05:55.510961 
4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca\": container with ID starting with 7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca not found: ID does not exist" containerID="7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.510990 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca"} err="failed to get container status \"7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca\": rpc error: code = NotFound desc = could not find container \"7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca\": container with ID starting with 7a03c1dce4c286eb5366605071a10977df24a7bbb8b97b544857f929710ee4ca not found: ID does not exist" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.511007 4787 scope.go:117] "RemoveContainer" containerID="f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.519904 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-config-data" (OuterVolumeSpecName: "config-data") pod "621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" (UID: "621bfbac-f962-44a9-b7dc-0ede7bd2aaa4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.525119 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" (UID: "621bfbac-f962-44a9-b7dc-0ede7bd2aaa4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.536888 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ba7dad7-8960-487f-8626-a73b43620632" (UID: "6ba7dad7-8960-487f-8626-a73b43620632"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.538172 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.545546 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-config-data" (OuterVolumeSpecName: "config-data") pod "6ba7dad7-8960-487f-8626-a73b43620632" (UID: "6ba7dad7-8960-487f-8626-a73b43620632"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.561269 4787 scope.go:117] "RemoveContainer" containerID="28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.583721 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-config-data\") pod \"e2031eb5-98c0-4b04-baa1-3e7392198341\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.583874 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-combined-ca-bundle\") pod \"e2031eb5-98c0-4b04-baa1-3e7392198341\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.584034 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv9n4\" (UniqueName: \"kubernetes.io/projected/6ba7dad7-8960-487f-8626-a73b43620632-kube-api-access-sv9n4\") pod \"6ba7dad7-8960-487f-8626-a73b43620632\" (UID: \"6ba7dad7-8960-487f-8626-a73b43620632\") " Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.584181 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhlmw\" (UniqueName: \"kubernetes.io/projected/e2031eb5-98c0-4b04-baa1-3e7392198341-kube-api-access-fhlmw\") pod \"e2031eb5-98c0-4b04-baa1-3e7392198341\" (UID: \"e2031eb5-98c0-4b04-baa1-3e7392198341\") " Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.585821 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.585844 4787 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.585853 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba7dad7-8960-487f-8626-a73b43620632-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.585862 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggwjv\" (UniqueName: \"kubernetes.io/projected/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-kube-api-access-ggwjv\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.585871 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.585879 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba7dad7-8960-487f-8626-a73b43620632-logs\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.585887 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4-logs\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.587522 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2031eb5-98c0-4b04-baa1-3e7392198341-kube-api-access-fhlmw" (OuterVolumeSpecName: "kube-api-access-fhlmw") pod "e2031eb5-98c0-4b04-baa1-3e7392198341" (UID: "e2031eb5-98c0-4b04-baa1-3e7392198341"). InnerVolumeSpecName "kube-api-access-fhlmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.588292 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba7dad7-8960-487f-8626-a73b43620632-kube-api-access-sv9n4" (OuterVolumeSpecName: "kube-api-access-sv9n4") pod "6ba7dad7-8960-487f-8626-a73b43620632" (UID: "6ba7dad7-8960-487f-8626-a73b43620632"). InnerVolumeSpecName "kube-api-access-sv9n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.590721 4787 scope.go:117] "RemoveContainer" containerID="f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3" Jan 26 20:05:55 crc kubenswrapper[4787]: E0126 20:05:55.591709 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3\": container with ID starting with f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3 not found: ID does not exist" containerID="f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.591745 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3"} err="failed to get container status \"f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3\": rpc error: code = NotFound desc = could not find container \"f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3\": container with ID starting with f5865ae696fb5e387928eab7f651eaa407267331186e6733b21060c2cc7b45c3 not found: ID does not exist" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.591765 4787 scope.go:117] "RemoveContainer" containerID="28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19" Jan 26 20:05:55 crc kubenswrapper[4787]: E0126 20:05:55.592016 
4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19\": container with ID starting with 28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19 not found: ID does not exist" containerID="28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.592049 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19"} err="failed to get container status \"28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19\": rpc error: code = NotFound desc = could not find container \"28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19\": container with ID starting with 28b9851a0c5af10a096a0eea4938fa9b05062aafc717da6eaa896a850178de19 not found: ID does not exist" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.617112 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2031eb5-98c0-4b04-baa1-3e7392198341" (UID: "e2031eb5-98c0-4b04-baa1-3e7392198341"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.621583 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-config-data" (OuterVolumeSpecName: "config-data") pod "e2031eb5-98c0-4b04-baa1-3e7392198341" (UID: "e2031eb5-98c0-4b04-baa1-3e7392198341"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.688228 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv9n4\" (UniqueName: \"kubernetes.io/projected/6ba7dad7-8960-487f-8626-a73b43620632-kube-api-access-sv9n4\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.688269 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhlmw\" (UniqueName: \"kubernetes.io/projected/e2031eb5-98c0-4b04-baa1-3e7392198341-kube-api-access-fhlmw\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.688280 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.688291 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2031eb5-98c0-4b04-baa1-3e7392198341-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.732773 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.755104 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.765691 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 20:05:55 crc kubenswrapper[4787]: E0126 20:05:55.766268 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-log" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.766290 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-log" 
Jan 26 20:05:55 crc kubenswrapper[4787]: E0126 20:05:55.766302 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba7dad7-8960-487f-8626-a73b43620632" containerName="nova-api-log" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.766310 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba7dad7-8960-487f-8626-a73b43620632" containerName="nova-api-log" Jan 26 20:05:55 crc kubenswrapper[4787]: E0126 20:05:55.766322 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2031eb5-98c0-4b04-baa1-3e7392198341" containerName="nova-cell0-conductor-conductor" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.766329 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2031eb5-98c0-4b04-baa1-3e7392198341" containerName="nova-cell0-conductor-conductor" Jan 26 20:05:55 crc kubenswrapper[4787]: E0126 20:05:55.766349 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba7dad7-8960-487f-8626-a73b43620632" containerName="nova-api-api" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.766356 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba7dad7-8960-487f-8626-a73b43620632" containerName="nova-api-api" Jan 26 20:05:55 crc kubenswrapper[4787]: E0126 20:05:55.766371 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-metadata" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.766378 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-metadata" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.766600 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2031eb5-98c0-4b04-baa1-3e7392198341" containerName="nova-cell0-conductor-conductor" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.766634 4787 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6ba7dad7-8960-487f-8626-a73b43620632" containerName="nova-api-api" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.766643 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba7dad7-8960-487f-8626-a73b43620632" containerName="nova-api-log" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.766655 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-metadata" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.766666 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" containerName="nova-metadata-log" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.769922 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.773143 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.788405 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.789708 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5mxp\" (UniqueName: \"kubernetes.io/projected/a1647dea-71d2-484b-89db-3d610a57e3fc-kube-api-access-z5mxp\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.789876 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1647dea-71d2-484b-89db-3d610a57e3fc-logs\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 
20:05:55.789916 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1647dea-71d2-484b-89db-3d610a57e3fc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.789940 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1647dea-71d2-484b-89db-3d610a57e3fc-config-data\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.891242 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5mxp\" (UniqueName: \"kubernetes.io/projected/a1647dea-71d2-484b-89db-3d610a57e3fc-kube-api-access-z5mxp\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.891331 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1647dea-71d2-484b-89db-3d610a57e3fc-logs\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.891361 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1647dea-71d2-484b-89db-3d610a57e3fc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.891377 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a1647dea-71d2-484b-89db-3d610a57e3fc-config-data\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.891810 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1647dea-71d2-484b-89db-3d610a57e3fc-logs\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.895341 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1647dea-71d2-484b-89db-3d610a57e3fc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.895526 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1647dea-71d2-484b-89db-3d610a57e3fc-config-data\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:55 crc kubenswrapper[4787]: I0126 20:05:55.909141 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5mxp\" (UniqueName: \"kubernetes.io/projected/a1647dea-71d2-484b-89db-3d610a57e3fc-kube-api-access-z5mxp\") pod \"nova-metadata-0\" (UID: \"a1647dea-71d2-484b-89db-3d610a57e3fc\") " pod="openstack/nova-metadata-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.096204 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.141604 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s"] Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.146723 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.151735 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.151920 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dvntg" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.152026 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.152092 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.152302 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.152897 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.153117 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.163529 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s"] Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.197855 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.198250 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.198298 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.198358 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vstpn\" (UniqueName: \"kubernetes.io/projected/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-kube-api-access-vstpn\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.198446 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.198522 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.198565 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.198595 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.198631 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.198658 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.198689 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.301834 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.301899 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vstpn\" (UniqueName: \"kubernetes.io/projected/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-kube-api-access-vstpn\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.301982 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.302037 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.302064 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.302085 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.302112 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.302134 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.302163 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.302202 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-0\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.302223 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.303070 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.306040 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.311448 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.311990 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.315212 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.316586 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.331540 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 
20:05:56.335492 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.335986 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vstpn\" (UniqueName: \"kubernetes.io/projected/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-kube-api-access-vstpn\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.341185 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.363440 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.476285 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.479101 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e2031eb5-98c0-4b04-baa1-3e7392198341","Type":"ContainerDied","Data":"d0e1e76439bfb81929a1adc9cd2d7ec65a3d2bfc46bb88872bb53dcba02c0610"} Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.479182 4787 scope.go:117] "RemoveContainer" containerID="f4e5df9b9ee303c518fa932f79eceb8e3e30a2d19801376fb2aa4e2f1e1cc007" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.479121 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.511834 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.525136 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.532232 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.542858 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.544571 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.546863 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.555077 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.581281 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.592741 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.594166 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.596587 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.613850 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.623384 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdxp2\" (UniqueName: \"kubernetes.io/projected/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-kube-api-access-mdxp2\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.623443 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-logs\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.623488 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3fab93-c7b9-466b-a961-7b1f04d5e0e0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3c3fab93-c7b9-466b-a961-7b1f04d5e0e0\") " pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.623643 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.626325 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zkh\" (UniqueName: \"kubernetes.io/projected/3c3fab93-c7b9-466b-a961-7b1f04d5e0e0-kube-api-access-85zkh\") pod \"nova-cell0-conductor-0\" (UID: \"3c3fab93-c7b9-466b-a961-7b1f04d5e0e0\") " pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.626380 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3fab93-c7b9-466b-a961-7b1f04d5e0e0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3c3fab93-c7b9-466b-a961-7b1f04d5e0e0\") " pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.626412 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-config-data\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.643994 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-conductor-0"] Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.665023 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.728283 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.728356 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85zkh\" (UniqueName: \"kubernetes.io/projected/3c3fab93-c7b9-466b-a961-7b1f04d5e0e0-kube-api-access-85zkh\") pod \"nova-cell0-conductor-0\" (UID: \"3c3fab93-c7b9-466b-a961-7b1f04d5e0e0\") " pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.728395 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3fab93-c7b9-466b-a961-7b1f04d5e0e0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3c3fab93-c7b9-466b-a961-7b1f04d5e0e0\") " pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.728432 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-config-data\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.728471 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdxp2\" (UniqueName: \"kubernetes.io/projected/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-kube-api-access-mdxp2\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " 
pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.728507 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-logs\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.728557 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3fab93-c7b9-466b-a961-7b1f04d5e0e0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3c3fab93-c7b9-466b-a961-7b1f04d5e0e0\") " pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.730764 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-logs\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.734488 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-config-data\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.735353 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3fab93-c7b9-466b-a961-7b1f04d5e0e0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3c3fab93-c7b9-466b-a961-7b1f04d5e0e0\") " pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.735445 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3c3fab93-c7b9-466b-a961-7b1f04d5e0e0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3c3fab93-c7b9-466b-a961-7b1f04d5e0e0\") " pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.736176 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.750675 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdxp2\" (UniqueName: \"kubernetes.io/projected/c87ddc92-6046-48e8-91a9-5bfd2bc991c5-kube-api-access-mdxp2\") pod \"nova-api-0\" (UID: \"c87ddc92-6046-48e8-91a9-5bfd2bc991c5\") " pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.750735 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85zkh\" (UniqueName: \"kubernetes.io/projected/3c3fab93-c7b9-466b-a961-7b1f04d5e0e0-kube-api-access-85zkh\") pod \"nova-cell0-conductor-0\" (UID: \"3c3fab93-c7b9-466b-a961-7b1f04d5e0e0\") " pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.868007 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 20:05:56 crc kubenswrapper[4787]: I0126 20:05:56.927101 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.185739 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s"] Jan 26 20:05:57 crc kubenswrapper[4787]: W0126 20:05:57.190003 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3f02fe9_4c51_4bfb_8b84_a673b23ad0ca.slice/crio-28fb7dc11cf03dc18518754bff3563b160cddd2dc43a6e32966c3d25ec81e84a WatchSource:0}: Error finding container 28fb7dc11cf03dc18518754bff3563b160cddd2dc43a6e32966c3d25ec81e84a: Status 404 returned error can't find the container with id 28fb7dc11cf03dc18518754bff3563b160cddd2dc43a6e32966c3d25ec81e84a Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.383495 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.508713 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" event={"ID":"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca","Type":"ContainerStarted","Data":"28fb7dc11cf03dc18518754bff3563b160cddd2dc43a6e32966c3d25ec81e84a"} Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.511723 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c87ddc92-6046-48e8-91a9-5bfd2bc991c5","Type":"ContainerStarted","Data":"8671ce08b8f2ee50d452544cd32391ba588d2698d76a37ea63d2891acaf1bffd"} Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.516782 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1647dea-71d2-484b-89db-3d610a57e3fc","Type":"ContainerStarted","Data":"dcede8b1b6f51a6dddd3221bf33997058dddab614b77f6bd33b61ecc16c5c46a"} Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.518304 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1647dea-71d2-484b-89db-3d610a57e3fc","Type":"ContainerStarted","Data":"61a52032561f8de2559bf2a8767d1b41e8f948d17f73c67077d2a984dc110028"} Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.520858 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1647dea-71d2-484b-89db-3d610a57e3fc","Type":"ContainerStarted","Data":"a63a5657306359418268a54174b175f6bcbf9e3da93d367e1854556a19f23242"} Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.540066 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.570273 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.570257681 podStartE2EDuration="2.570257681s" podCreationTimestamp="2026-01-26 20:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:05:57.55429665 +0000 UTC m=+8526.261432783" watchObservedRunningTime="2026-01-26 20:05:57.570257681 +0000 UTC m=+8526.277393814" Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.637306 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621bfbac-f962-44a9-b7dc-0ede7bd2aaa4" path="/var/lib/kubelet/pods/621bfbac-f962-44a9-b7dc-0ede7bd2aaa4/volumes" Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.638010 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba7dad7-8960-487f-8626-a73b43620632" path="/var/lib/kubelet/pods/6ba7dad7-8960-487f-8626-a73b43620632/volumes" Jan 26 20:05:57 crc kubenswrapper[4787]: I0126 20:05:57.640038 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2031eb5-98c0-4b04-baa1-3e7392198341" path="/var/lib/kubelet/pods/e2031eb5-98c0-4b04-baa1-3e7392198341/volumes" Jan 26 20:05:58 crc 
kubenswrapper[4787]: I0126 20:05:58.526121 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3c3fab93-c7b9-466b-a961-7b1f04d5e0e0","Type":"ContainerStarted","Data":"43d3869d243a1b29d3827b877797eae2ed9b2e73a90d59f27f9efc124830216c"} Jan 26 20:05:58 crc kubenswrapper[4787]: I0126 20:05:58.526567 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3c3fab93-c7b9-466b-a961-7b1f04d5e0e0","Type":"ContainerStarted","Data":"ec9b3d9f49250605441fffbfc464ff49fdd7ccdf1157920501d34360b5b75493"} Jan 26 20:05:58 crc kubenswrapper[4787]: I0126 20:05:58.526771 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 20:05:58 crc kubenswrapper[4787]: I0126 20:05:58.529714 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" event={"ID":"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca","Type":"ContainerStarted","Data":"0c2af475f82595d8b06deed9972c8c476fdd35b9f81dca40af2236dd2683d3b1"} Jan 26 20:05:58 crc kubenswrapper[4787]: I0126 20:05:58.532960 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c87ddc92-6046-48e8-91a9-5bfd2bc991c5","Type":"ContainerStarted","Data":"60af4d98734253c8a4e1ba5f3a56fb49291c905d41978d522538c447f2f12247"} Jan 26 20:05:58 crc kubenswrapper[4787]: I0126 20:05:58.533042 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c87ddc92-6046-48e8-91a9-5bfd2bc991c5","Type":"ContainerStarted","Data":"bf80551a1d13f71d25b268e590cf7574bd292ef7603ac692ba34df277f2d2edb"} Jan 26 20:05:58 crc kubenswrapper[4787]: I0126 20:05:58.564672 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.564649128 podStartE2EDuration="2.564649128s" podCreationTimestamp="2026-01-26 20:05:56 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:05:58.544032022 +0000 UTC m=+8527.251168155" watchObservedRunningTime="2026-01-26 20:05:58.564649128 +0000 UTC m=+8527.271785261" Jan 26 20:05:58 crc kubenswrapper[4787]: I0126 20:05:58.582361 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" podStartSLOduration=2.021502411 podStartE2EDuration="2.582338031s" podCreationTimestamp="2026-01-26 20:05:56 +0000 UTC" firstStartedPulling="2026-01-26 20:05:57.19604018 +0000 UTC m=+8525.903176313" lastFinishedPulling="2026-01-26 20:05:57.7568758 +0000 UTC m=+8526.464011933" observedRunningTime="2026-01-26 20:05:58.57129582 +0000 UTC m=+8527.278431983" watchObservedRunningTime="2026-01-26 20:05:58.582338031 +0000 UTC m=+8527.289474174" Jan 26 20:05:58 crc kubenswrapper[4787]: I0126 20:05:58.610689 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.610668856 podStartE2EDuration="2.610668856s" podCreationTimestamp="2026-01-26 20:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:05:58.606292809 +0000 UTC m=+8527.313428982" watchObservedRunningTime="2026-01-26 20:05:58.610668856 +0000 UTC m=+8527.317804999" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.255129 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.286813 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-combined-ca-bundle\") pod \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.286942 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt42d\" (UniqueName: \"kubernetes.io/projected/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-kube-api-access-rt42d\") pod \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.287039 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-config-data\") pod \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\" (UID: \"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe\") " Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.297250 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-kube-api-access-rt42d" (OuterVolumeSpecName: "kube-api-access-rt42d") pod "d70355a6-3daa-42e6-8b3d-7bedf5d20cfe" (UID: "d70355a6-3daa-42e6-8b3d-7bedf5d20cfe"). InnerVolumeSpecName "kube-api-access-rt42d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.330469 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-config-data" (OuterVolumeSpecName: "config-data") pod "d70355a6-3daa-42e6-8b3d-7bedf5d20cfe" (UID: "d70355a6-3daa-42e6-8b3d-7bedf5d20cfe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.337211 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d70355a6-3daa-42e6-8b3d-7bedf5d20cfe" (UID: "d70355a6-3daa-42e6-8b3d-7bedf5d20cfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.390258 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.390310 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt42d\" (UniqueName: \"kubernetes.io/projected/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-kube-api-access-rt42d\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.390333 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.549381 4787 generic.go:334] "Generic (PLEG): container finished" podID="d70355a6-3daa-42e6-8b3d-7bedf5d20cfe" containerID="829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65" exitCode=0 Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.549470 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.549458 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe","Type":"ContainerDied","Data":"829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65"} Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.549562 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d70355a6-3daa-42e6-8b3d-7bedf5d20cfe","Type":"ContainerDied","Data":"ba3490a6be720a6512190cdb4b5535877476701c5a0cd19e6bb5377bb79122e1"} Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.549602 4787 scope.go:117] "RemoveContainer" containerID="829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.616920 4787 scope.go:117] "RemoveContainer" containerID="829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.619054 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 20:05:59 crc kubenswrapper[4787]: E0126 20:05:59.623391 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65\": container with ID starting with 829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65 not found: ID does not exist" containerID="829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.623452 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65"} err="failed to get container status \"829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65\": rpc error: code = NotFound 
desc = could not find container \"829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65\": container with ID starting with 829ab1b32829a5b4f771f4f08c49162e54c047ddcb9a1a626d3e328155d0bc65 not found: ID does not exist" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.630643 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.648414 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 20:05:59 crc kubenswrapper[4787]: E0126 20:05:59.648904 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70355a6-3daa-42e6-8b3d-7bedf5d20cfe" containerName="nova-scheduler-scheduler" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.648924 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70355a6-3daa-42e6-8b3d-7bedf5d20cfe" containerName="nova-scheduler-scheduler" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.649147 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70355a6-3daa-42e6-8b3d-7bedf5d20cfe" containerName="nova-scheduler-scheduler" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.649887 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.652346 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.658029 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.800665 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfrst\" (UniqueName: \"kubernetes.io/projected/4884b53e-5d30-42ca-96d8-d72088dbc449-kube-api-access-jfrst\") pod \"nova-scheduler-0\" (UID: \"4884b53e-5d30-42ca-96d8-d72088dbc449\") " pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.800879 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4884b53e-5d30-42ca-96d8-d72088dbc449-config-data\") pod \"nova-scheduler-0\" (UID: \"4884b53e-5d30-42ca-96d8-d72088dbc449\") " pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.800925 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4884b53e-5d30-42ca-96d8-d72088dbc449-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4884b53e-5d30-42ca-96d8-d72088dbc449\") " pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.902776 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4884b53e-5d30-42ca-96d8-d72088dbc449-config-data\") pod \"nova-scheduler-0\" (UID: \"4884b53e-5d30-42ca-96d8-d72088dbc449\") " pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.903123 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4884b53e-5d30-42ca-96d8-d72088dbc449-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4884b53e-5d30-42ca-96d8-d72088dbc449\") " pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.903249 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfrst\" (UniqueName: \"kubernetes.io/projected/4884b53e-5d30-42ca-96d8-d72088dbc449-kube-api-access-jfrst\") pod \"nova-scheduler-0\" (UID: \"4884b53e-5d30-42ca-96d8-d72088dbc449\") " pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.910330 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4884b53e-5d30-42ca-96d8-d72088dbc449-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4884b53e-5d30-42ca-96d8-d72088dbc449\") " pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.919809 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4884b53e-5d30-42ca-96d8-d72088dbc449-config-data\") pod \"nova-scheduler-0\" (UID: \"4884b53e-5d30-42ca-96d8-d72088dbc449\") " pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.932569 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfrst\" (UniqueName: \"kubernetes.io/projected/4884b53e-5d30-42ca-96d8-d72088dbc449-kube-api-access-jfrst\") pod \"nova-scheduler-0\" (UID: \"4884b53e-5d30-42ca-96d8-d72088dbc449\") " pod="openstack/nova-scheduler-0" Jan 26 20:05:59 crc kubenswrapper[4787]: I0126 20:05:59.981395 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 20:06:00 crc kubenswrapper[4787]: I0126 20:06:00.474424 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 20:06:00 crc kubenswrapper[4787]: W0126 20:06:00.482072 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4884b53e_5d30_42ca_96d8_d72088dbc449.slice/crio-b4f86743e2c165b34ab5481792f044536b0caed6257471d26a98ff3120060f51 WatchSource:0}: Error finding container b4f86743e2c165b34ab5481792f044536b0caed6257471d26a98ff3120060f51: Status 404 returned error can't find the container with id b4f86743e2c165b34ab5481792f044536b0caed6257471d26a98ff3120060f51 Jan 26 20:06:00 crc kubenswrapper[4787]: I0126 20:06:00.563795 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4884b53e-5d30-42ca-96d8-d72088dbc449","Type":"ContainerStarted","Data":"b4f86743e2c165b34ab5481792f044536b0caed6257471d26a98ff3120060f51"} Jan 26 20:06:01 crc kubenswrapper[4787]: I0126 20:06:01.096762 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 20:06:01 crc kubenswrapper[4787]: I0126 20:06:01.097161 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 20:06:01 crc kubenswrapper[4787]: I0126 20:06:01.581795 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4884b53e-5d30-42ca-96d8-d72088dbc449","Type":"ContainerStarted","Data":"7b70c610c17a70502f187d705d7bb1d62cce6e29f9f688970c93223e63aa1652"} Jan 26 20:06:01 crc kubenswrapper[4787]: I0126 20:06:01.615818 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.615794134 podStartE2EDuration="2.615794134s" podCreationTimestamp="2026-01-26 20:05:59 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:06:01.601297258 +0000 UTC m=+8530.308433391" watchObservedRunningTime="2026-01-26 20:06:01.615794134 +0000 UTC m=+8530.322930277" Jan 26 20:06:01 crc kubenswrapper[4787]: I0126 20:06:01.619514 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d70355a6-3daa-42e6-8b3d-7bedf5d20cfe" path="/var/lib/kubelet/pods/d70355a6-3daa-42e6-8b3d-7bedf5d20cfe/volumes" Jan 26 20:06:03 crc kubenswrapper[4787]: I0126 20:06:03.783756 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 20:06:04 crc kubenswrapper[4787]: I0126 20:06:04.982245 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 20:06:06 crc kubenswrapper[4787]: I0126 20:06:06.096854 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 20:06:06 crc kubenswrapper[4787]: I0126 20:06:06.097366 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 20:06:06 crc kubenswrapper[4787]: I0126 20:06:06.868902 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 20:06:06 crc kubenswrapper[4787]: I0126 20:06:06.869477 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 20:06:06 crc kubenswrapper[4787]: I0126 20:06:06.987631 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 20:06:07 crc kubenswrapper[4787]: I0126 20:06:07.138282 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a1647dea-71d2-484b-89db-3d610a57e3fc" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"http://10.217.1.181:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 20:06:07 crc kubenswrapper[4787]: I0126 20:06:07.180170 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a1647dea-71d2-484b-89db-3d610a57e3fc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.181:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 20:06:07 crc kubenswrapper[4787]: I0126 20:06:07.952150 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c87ddc92-6046-48e8-91a9-5bfd2bc991c5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 20:06:07 crc kubenswrapper[4787]: I0126 20:06:07.952870 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c87ddc92-6046-48e8-91a9-5bfd2bc991c5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 20:06:09 crc kubenswrapper[4787]: I0126 20:06:09.982483 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 20:06:10 crc kubenswrapper[4787]: I0126 20:06:10.020193 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 20:06:10 crc kubenswrapper[4787]: I0126 20:06:10.724416 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 20:06:16 crc kubenswrapper[4787]: I0126 20:06:16.100744 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 20:06:16 crc kubenswrapper[4787]: I0126 20:06:16.101610 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 20:06:16 crc kubenswrapper[4787]: I0126 20:06:16.104099 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 20:06:16 crc kubenswrapper[4787]: I0126 20:06:16.105030 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 20:06:16 crc kubenswrapper[4787]: I0126 20:06:16.874020 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 20:06:16 crc kubenswrapper[4787]: I0126 20:06:16.874090 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 20:06:16 crc kubenswrapper[4787]: I0126 20:06:16.874528 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 20:06:16 crc kubenswrapper[4787]: I0126 20:06:16.874576 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 20:06:16 crc kubenswrapper[4787]: I0126 20:06:16.880278 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 20:06:16 crc kubenswrapper[4787]: I0126 20:06:16.882574 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 20:07:16 crc kubenswrapper[4787]: I0126 20:07:16.807659 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:07:16 crc kubenswrapper[4787]: I0126 20:07:16.808593 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.683492 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6qf69"] Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.688370 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.696368 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qf69"] Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.798387 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hjp5\" (UniqueName: \"kubernetes.io/projected/12141ef3-dfe5-456a-b93a-921bccfa11cb-kube-api-access-5hjp5\") pod \"redhat-operators-6qf69\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.798489 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-catalog-content\") pod \"redhat-operators-6qf69\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.798668 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-utilities\") pod \"redhat-operators-6qf69\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.901468 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-utilities\") pod \"redhat-operators-6qf69\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.901607 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hjp5\" (UniqueName: \"kubernetes.io/projected/12141ef3-dfe5-456a-b93a-921bccfa11cb-kube-api-access-5hjp5\") pod \"redhat-operators-6qf69\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.901653 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-catalog-content\") pod \"redhat-operators-6qf69\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.902161 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-utilities\") pod \"redhat-operators-6qf69\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.902199 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-catalog-content\") pod \"redhat-operators-6qf69\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:25 crc kubenswrapper[4787]: I0126 20:07:25.926541 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5hjp5\" (UniqueName: \"kubernetes.io/projected/12141ef3-dfe5-456a-b93a-921bccfa11cb-kube-api-access-5hjp5\") pod \"redhat-operators-6qf69\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:26 crc kubenswrapper[4787]: I0126 20:07:26.039629 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:26 crc kubenswrapper[4787]: I0126 20:07:26.530377 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qf69"] Jan 26 20:07:26 crc kubenswrapper[4787]: W0126 20:07:26.532793 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12141ef3_dfe5_456a_b93a_921bccfa11cb.slice/crio-960f777a0f42e806950fde4b7d1c8058804fb2eabbf2751e70b14b981d1849a6 WatchSource:0}: Error finding container 960f777a0f42e806950fde4b7d1c8058804fb2eabbf2751e70b14b981d1849a6: Status 404 returned error can't find the container with id 960f777a0f42e806950fde4b7d1c8058804fb2eabbf2751e70b14b981d1849a6 Jan 26 20:07:26 crc kubenswrapper[4787]: I0126 20:07:26.719698 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qf69" event={"ID":"12141ef3-dfe5-456a-b93a-921bccfa11cb","Type":"ContainerStarted","Data":"960f777a0f42e806950fde4b7d1c8058804fb2eabbf2751e70b14b981d1849a6"} Jan 26 20:07:27 crc kubenswrapper[4787]: I0126 20:07:27.731680 4787 generic.go:334] "Generic (PLEG): container finished" podID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerID="ea44c66839e4f67f11cade75b9140f37929822f133e73c06b642f9130a5a906b" exitCode=0 Jan 26 20:07:27 crc kubenswrapper[4787]: I0126 20:07:27.731753 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qf69" 
event={"ID":"12141ef3-dfe5-456a-b93a-921bccfa11cb","Type":"ContainerDied","Data":"ea44c66839e4f67f11cade75b9140f37929822f133e73c06b642f9130a5a906b"} Jan 26 20:07:28 crc kubenswrapper[4787]: I0126 20:07:28.748825 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qf69" event={"ID":"12141ef3-dfe5-456a-b93a-921bccfa11cb","Type":"ContainerStarted","Data":"404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee"} Jan 26 20:07:31 crc kubenswrapper[4787]: I0126 20:07:31.779572 4787 generic.go:334] "Generic (PLEG): container finished" podID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerID="404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee" exitCode=0 Jan 26 20:07:31 crc kubenswrapper[4787]: I0126 20:07:31.779708 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qf69" event={"ID":"12141ef3-dfe5-456a-b93a-921bccfa11cb","Type":"ContainerDied","Data":"404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee"} Jan 26 20:07:32 crc kubenswrapper[4787]: I0126 20:07:32.795779 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qf69" event={"ID":"12141ef3-dfe5-456a-b93a-921bccfa11cb","Type":"ContainerStarted","Data":"2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b"} Jan 26 20:07:32 crc kubenswrapper[4787]: I0126 20:07:32.819294 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6qf69" podStartSLOduration=3.290462137 podStartE2EDuration="7.819271485s" podCreationTimestamp="2026-01-26 20:07:25 +0000 UTC" firstStartedPulling="2026-01-26 20:07:27.734162048 +0000 UTC m=+8616.441298181" lastFinishedPulling="2026-01-26 20:07:32.262971356 +0000 UTC m=+8620.970107529" observedRunningTime="2026-01-26 20:07:32.814334563 +0000 UTC m=+8621.521470706" watchObservedRunningTime="2026-01-26 20:07:32.819271485 +0000 UTC m=+8621.526407638" 
Jan 26 20:07:36 crc kubenswrapper[4787]: I0126 20:07:36.042603 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:36 crc kubenswrapper[4787]: I0126 20:07:36.045289 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:37 crc kubenswrapper[4787]: I0126 20:07:37.099859 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6qf69" podUID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerName="registry-server" probeResult="failure" output=< Jan 26 20:07:37 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 26 20:07:37 crc kubenswrapper[4787]: > Jan 26 20:07:46 crc kubenswrapper[4787]: I0126 20:07:46.110281 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:46 crc kubenswrapper[4787]: I0126 20:07:46.231190 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:46 crc kubenswrapper[4787]: I0126 20:07:46.354655 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qf69"] Jan 26 20:07:46 crc kubenswrapper[4787]: I0126 20:07:46.807701 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:07:46 crc kubenswrapper[4787]: I0126 20:07:46.807785 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:07:48 crc kubenswrapper[4787]: I0126 20:07:48.002362 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6qf69" podUID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerName="registry-server" containerID="cri-o://2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b" gracePeriod=2 Jan 26 20:07:48 crc kubenswrapper[4787]: I0126 20:07:48.596805 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:48 crc kubenswrapper[4787]: I0126 20:07:48.696149 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-catalog-content\") pod \"12141ef3-dfe5-456a-b93a-921bccfa11cb\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " Jan 26 20:07:48 crc kubenswrapper[4787]: I0126 20:07:48.696221 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-utilities\") pod \"12141ef3-dfe5-456a-b93a-921bccfa11cb\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " Jan 26 20:07:48 crc kubenswrapper[4787]: I0126 20:07:48.696418 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hjp5\" (UniqueName: \"kubernetes.io/projected/12141ef3-dfe5-456a-b93a-921bccfa11cb-kube-api-access-5hjp5\") pod \"12141ef3-dfe5-456a-b93a-921bccfa11cb\" (UID: \"12141ef3-dfe5-456a-b93a-921bccfa11cb\") " Jan 26 20:07:48 crc kubenswrapper[4787]: I0126 20:07:48.698093 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-utilities" (OuterVolumeSpecName: "utilities") pod 
"12141ef3-dfe5-456a-b93a-921bccfa11cb" (UID: "12141ef3-dfe5-456a-b93a-921bccfa11cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:07:48 crc kubenswrapper[4787]: I0126 20:07:48.704333 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12141ef3-dfe5-456a-b93a-921bccfa11cb-kube-api-access-5hjp5" (OuterVolumeSpecName: "kube-api-access-5hjp5") pod "12141ef3-dfe5-456a-b93a-921bccfa11cb" (UID: "12141ef3-dfe5-456a-b93a-921bccfa11cb"). InnerVolumeSpecName "kube-api-access-5hjp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:07:48 crc kubenswrapper[4787]: I0126 20:07:48.799905 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:07:48 crc kubenswrapper[4787]: I0126 20:07:48.800331 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hjp5\" (UniqueName: \"kubernetes.io/projected/12141ef3-dfe5-456a-b93a-921bccfa11cb-kube-api-access-5hjp5\") on node \"crc\" DevicePath \"\"" Jan 26 20:07:48 crc kubenswrapper[4787]: I0126 20:07:48.853645 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12141ef3-dfe5-456a-b93a-921bccfa11cb" (UID: "12141ef3-dfe5-456a-b93a-921bccfa11cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:07:48 crc kubenswrapper[4787]: I0126 20:07:48.902543 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12141ef3-dfe5-456a-b93a-921bccfa11cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.016922 4787 generic.go:334] "Generic (PLEG): container finished" podID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerID="2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b" exitCode=0 Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.017048 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qf69" Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.017019 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qf69" event={"ID":"12141ef3-dfe5-456a-b93a-921bccfa11cb","Type":"ContainerDied","Data":"2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b"} Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.017173 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qf69" event={"ID":"12141ef3-dfe5-456a-b93a-921bccfa11cb","Type":"ContainerDied","Data":"960f777a0f42e806950fde4b7d1c8058804fb2eabbf2751e70b14b981d1849a6"} Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.017192 4787 scope.go:117] "RemoveContainer" containerID="2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b" Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.047215 4787 scope.go:117] "RemoveContainer" containerID="404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee" Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.060812 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qf69"] Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 
20:07:49.073588 4787 scope.go:117] "RemoveContainer" containerID="ea44c66839e4f67f11cade75b9140f37929822f133e73c06b642f9130a5a906b" Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.073920 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6qf69"] Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.121339 4787 scope.go:117] "RemoveContainer" containerID="2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b" Jan 26 20:07:49 crc kubenswrapper[4787]: E0126 20:07:49.121861 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b\": container with ID starting with 2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b not found: ID does not exist" containerID="2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b" Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.121939 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b"} err="failed to get container status \"2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b\": rpc error: code = NotFound desc = could not find container \"2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b\": container with ID starting with 2a9cb3b1889471e93e19f1356aa2412a97e5d45e58098142ec48567f59a4f11b not found: ID does not exist" Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.122006 4787 scope.go:117] "RemoveContainer" containerID="404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee" Jan 26 20:07:49 crc kubenswrapper[4787]: E0126 20:07:49.122356 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee\": container with ID 
starting with 404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee not found: ID does not exist" containerID="404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee" Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.122391 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee"} err="failed to get container status \"404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee\": rpc error: code = NotFound desc = could not find container \"404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee\": container with ID starting with 404b0d1fd5a027611db25b54b87b73b2bdd21957b96f86b1b1810574e65760ee not found: ID does not exist" Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.122415 4787 scope.go:117] "RemoveContainer" containerID="ea44c66839e4f67f11cade75b9140f37929822f133e73c06b642f9130a5a906b" Jan 26 20:07:49 crc kubenswrapper[4787]: E0126 20:07:49.122700 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea44c66839e4f67f11cade75b9140f37929822f133e73c06b642f9130a5a906b\": container with ID starting with ea44c66839e4f67f11cade75b9140f37929822f133e73c06b642f9130a5a906b not found: ID does not exist" containerID="ea44c66839e4f67f11cade75b9140f37929822f133e73c06b642f9130a5a906b" Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.122749 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea44c66839e4f67f11cade75b9140f37929822f133e73c06b642f9130a5a906b"} err="failed to get container status \"ea44c66839e4f67f11cade75b9140f37929822f133e73c06b642f9130a5a906b\": rpc error: code = NotFound desc = could not find container \"ea44c66839e4f67f11cade75b9140f37929822f133e73c06b642f9130a5a906b\": container with ID starting with ea44c66839e4f67f11cade75b9140f37929822f133e73c06b642f9130a5a906b not found: 
ID does not exist" Jan 26 20:07:49 crc kubenswrapper[4787]: I0126 20:07:49.615351 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12141ef3-dfe5-456a-b93a-921bccfa11cb" path="/var/lib/kubelet/pods/12141ef3-dfe5-456a-b93a-921bccfa11cb/volumes" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.429727 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dbft9"] Jan 26 20:08:03 crc kubenswrapper[4787]: E0126 20:08:03.431276 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerName="extract-utilities" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.431302 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerName="extract-utilities" Jan 26 20:08:03 crc kubenswrapper[4787]: E0126 20:08:03.431339 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerName="extract-content" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.431352 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerName="extract-content" Jan 26 20:08:03 crc kubenswrapper[4787]: E0126 20:08:03.431377 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerName="registry-server" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.431389 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerName="registry-server" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.431813 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="12141ef3-dfe5-456a-b93a-921bccfa11cb" containerName="registry-server" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.434992 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.446793 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbft9"] Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.571646 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbjjx\" (UniqueName: \"kubernetes.io/projected/f5de1b1c-95c2-4e89-b636-ba85159b5c28-kube-api-access-qbjjx\") pod \"certified-operators-dbft9\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") " pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.571820 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-catalog-content\") pod \"certified-operators-dbft9\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") " pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.571881 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-utilities\") pod \"certified-operators-dbft9\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") " pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.674142 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbjjx\" (UniqueName: \"kubernetes.io/projected/f5de1b1c-95c2-4e89-b636-ba85159b5c28-kube-api-access-qbjjx\") pod \"certified-operators-dbft9\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") " pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.674605 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-catalog-content\") pod \"certified-operators-dbft9\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") " pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.674650 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-utilities\") pod \"certified-operators-dbft9\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") " pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.675182 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-utilities\") pod \"certified-operators-dbft9\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") " pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.675373 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-catalog-content\") pod \"certified-operators-dbft9\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") " pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.704052 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbjjx\" (UniqueName: \"kubernetes.io/projected/f5de1b1c-95c2-4e89-b636-ba85159b5c28-kube-api-access-qbjjx\") pod \"certified-operators-dbft9\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") " pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:03 crc kubenswrapper[4787]: I0126 20:08:03.765897 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:04 crc kubenswrapper[4787]: I0126 20:08:04.317637 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbft9"] Jan 26 20:08:05 crc kubenswrapper[4787]: I0126 20:08:05.217471 4787 generic.go:334] "Generic (PLEG): container finished" podID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" containerID="6745316ce7cc6d91064925164573014841f300e99a9fc78b92d93b3c0ae10ebd" exitCode=0 Jan 26 20:08:05 crc kubenswrapper[4787]: I0126 20:08:05.217751 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbft9" event={"ID":"f5de1b1c-95c2-4e89-b636-ba85159b5c28","Type":"ContainerDied","Data":"6745316ce7cc6d91064925164573014841f300e99a9fc78b92d93b3c0ae10ebd"} Jan 26 20:08:05 crc kubenswrapper[4787]: I0126 20:08:05.217780 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbft9" event={"ID":"f5de1b1c-95c2-4e89-b636-ba85159b5c28","Type":"ContainerStarted","Data":"d552656784c00aa5feca4903b0c404d38ff076e3b077f229bc8e9bf41abdf034"} Jan 26 20:08:05 crc kubenswrapper[4787]: I0126 20:08:05.220841 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.228310 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbft9" event={"ID":"f5de1b1c-95c2-4e89-b636-ba85159b5c28","Type":"ContainerStarted","Data":"6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac"} Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.625581 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lmxvh"] Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.630551 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.645434 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmxvh"] Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.739236 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-catalog-content\") pod \"community-operators-lmxvh\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") " pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.739293 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lfqw\" (UniqueName: \"kubernetes.io/projected/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-kube-api-access-8lfqw\") pod \"community-operators-lmxvh\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") " pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.739328 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-utilities\") pod \"community-operators-lmxvh\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") " pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.841079 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-catalog-content\") pod \"community-operators-lmxvh\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") " pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.841419 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8lfqw\" (UniqueName: \"kubernetes.io/projected/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-kube-api-access-8lfqw\") pod \"community-operators-lmxvh\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") " pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.841537 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-catalog-content\") pod \"community-operators-lmxvh\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") " pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.841695 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-utilities\") pod \"community-operators-lmxvh\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") " pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.841930 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-utilities\") pod \"community-operators-lmxvh\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") " pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.862997 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lfqw\" (UniqueName: \"kubernetes.io/projected/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-kube-api-access-8lfqw\") pod \"community-operators-lmxvh\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") " pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:06 crc kubenswrapper[4787]: I0126 20:08:06.954294 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:07 crc kubenswrapper[4787]: I0126 20:08:07.253363 4787 generic.go:334] "Generic (PLEG): container finished" podID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" containerID="6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac" exitCode=0 Jan 26 20:08:07 crc kubenswrapper[4787]: I0126 20:08:07.253796 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbft9" event={"ID":"f5de1b1c-95c2-4e89-b636-ba85159b5c28","Type":"ContainerDied","Data":"6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac"} Jan 26 20:08:07 crc kubenswrapper[4787]: I0126 20:08:07.576316 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmxvh"] Jan 26 20:08:08 crc kubenswrapper[4787]: I0126 20:08:08.265761 4787 generic.go:334] "Generic (PLEG): container finished" podID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" containerID="34b1834e6cfc69f88c30b3d7acb1ff7854557bbe321dcf85961eda7070e88c4f" exitCode=0 Jan 26 20:08:08 crc kubenswrapper[4787]: I0126 20:08:08.265815 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxvh" event={"ID":"c41ae5d3-3c14-4825-8dbf-b4887b3809e2","Type":"ContainerDied","Data":"34b1834e6cfc69f88c30b3d7acb1ff7854557bbe321dcf85961eda7070e88c4f"} Jan 26 20:08:08 crc kubenswrapper[4787]: I0126 20:08:08.266219 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxvh" event={"ID":"c41ae5d3-3c14-4825-8dbf-b4887b3809e2","Type":"ContainerStarted","Data":"4942955fc4ed7c27862c06699a704b22ea57a4f335ca190ff93a5fed7971c950"} Jan 26 20:08:08 crc kubenswrapper[4787]: I0126 20:08:08.269698 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbft9" 
event={"ID":"f5de1b1c-95c2-4e89-b636-ba85159b5c28","Type":"ContainerStarted","Data":"ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a"} Jan 26 20:08:08 crc kubenswrapper[4787]: I0126 20:08:08.323050 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dbft9" podStartSLOduration=2.874721391 podStartE2EDuration="5.323029818s" podCreationTimestamp="2026-01-26 20:08:03 +0000 UTC" firstStartedPulling="2026-01-26 20:08:05.220444139 +0000 UTC m=+8653.927580292" lastFinishedPulling="2026-01-26 20:08:07.668752586 +0000 UTC m=+8656.375888719" observedRunningTime="2026-01-26 20:08:08.312487679 +0000 UTC m=+8657.019623822" watchObservedRunningTime="2026-01-26 20:08:08.323029818 +0000 UTC m=+8657.030165971" Jan 26 20:08:09 crc kubenswrapper[4787]: I0126 20:08:09.285100 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxvh" event={"ID":"c41ae5d3-3c14-4825-8dbf-b4887b3809e2","Type":"ContainerStarted","Data":"328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64"} Jan 26 20:08:10 crc kubenswrapper[4787]: I0126 20:08:10.306704 4787 generic.go:334] "Generic (PLEG): container finished" podID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" containerID="328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64" exitCode=0 Jan 26 20:08:10 crc kubenswrapper[4787]: I0126 20:08:10.306770 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxvh" event={"ID":"c41ae5d3-3c14-4825-8dbf-b4887b3809e2","Type":"ContainerDied","Data":"328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64"} Jan 26 20:08:11 crc kubenswrapper[4787]: I0126 20:08:11.322587 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxvh" 
event={"ID":"c41ae5d3-3c14-4825-8dbf-b4887b3809e2","Type":"ContainerStarted","Data":"1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338"} Jan 26 20:08:11 crc kubenswrapper[4787]: I0126 20:08:11.346290 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lmxvh" podStartSLOduration=2.8512217250000003 podStartE2EDuration="5.346275219s" podCreationTimestamp="2026-01-26 20:08:06 +0000 UTC" firstStartedPulling="2026-01-26 20:08:08.267883724 +0000 UTC m=+8656.975019857" lastFinishedPulling="2026-01-26 20:08:10.762937208 +0000 UTC m=+8659.470073351" observedRunningTime="2026-01-26 20:08:11.344304232 +0000 UTC m=+8660.051440365" watchObservedRunningTime="2026-01-26 20:08:11.346275219 +0000 UTC m=+8660.053411352" Jan 26 20:08:13 crc kubenswrapper[4787]: I0126 20:08:13.767178 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:13 crc kubenswrapper[4787]: I0126 20:08:13.767597 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:13 crc kubenswrapper[4787]: I0126 20:08:13.862055 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:14 crc kubenswrapper[4787]: I0126 20:08:14.454380 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:15 crc kubenswrapper[4787]: I0126 20:08:15.022609 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dbft9"] Jan 26 20:08:16 crc kubenswrapper[4787]: I0126 20:08:16.376217 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dbft9" podUID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" containerName="registry-server" 
containerID="cri-o://ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a" gracePeriod=2 Jan 26 20:08:16 crc kubenswrapper[4787]: I0126 20:08:16.808059 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:08:16 crc kubenswrapper[4787]: I0126 20:08:16.808559 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:08:16 crc kubenswrapper[4787]: I0126 20:08:16.808610 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 20:08:16 crc kubenswrapper[4787]: I0126 20:08:16.809630 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f6043ca920f870f78f18f8fcbd8dba77af8b649dcc965e5893af257769da443"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:08:16 crc kubenswrapper[4787]: I0126 20:08:16.809695 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://1f6043ca920f870f78f18f8fcbd8dba77af8b649dcc965e5893af257769da443" gracePeriod=600 Jan 26 20:08:16 crc kubenswrapper[4787]: I0126 20:08:16.955349 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:16 crc kubenswrapper[4787]: I0126 20:08:16.955390 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.065301 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lmxvh" Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.281216 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbft9" Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.393312 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="1f6043ca920f870f78f18f8fcbd8dba77af8b649dcc965e5893af257769da443" exitCode=0 Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.393392 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"1f6043ca920f870f78f18f8fcbd8dba77af8b649dcc965e5893af257769da443"} Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.393455 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb"} Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.393480 4787 scope.go:117] "RemoveContainer" containerID="1338d26e60520874523f78ae0e4b08134caced3e2b1298ea3a8fbbe0f46e881d" Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.396835 4787 generic.go:334] "Generic (PLEG): container finished" podID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" 
containerID="ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a" exitCode=0
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.397004 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbft9" event={"ID":"f5de1b1c-95c2-4e89-b636-ba85159b5c28","Type":"ContainerDied","Data":"ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a"}
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.397037 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbft9" event={"ID":"f5de1b1c-95c2-4e89-b636-ba85159b5c28","Type":"ContainerDied","Data":"d552656784c00aa5feca4903b0c404d38ff076e3b077f229bc8e9bf41abdf034"}
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.397170 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbft9"
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.409587 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbjjx\" (UniqueName: \"kubernetes.io/projected/f5de1b1c-95c2-4e89-b636-ba85159b5c28-kube-api-access-qbjjx\") pod \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") "
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.410470 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-catalog-content\") pod \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") "
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.410565 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-utilities\") pod \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\" (UID: \"f5de1b1c-95c2-4e89-b636-ba85159b5c28\") "
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.412177 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-utilities" (OuterVolumeSpecName: "utilities") pod "f5de1b1c-95c2-4e89-b636-ba85159b5c28" (UID: "f5de1b1c-95c2-4e89-b636-ba85159b5c28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.417755 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5de1b1c-95c2-4e89-b636-ba85159b5c28-kube-api-access-qbjjx" (OuterVolumeSpecName: "kube-api-access-qbjjx") pod "f5de1b1c-95c2-4e89-b636-ba85159b5c28" (UID: "f5de1b1c-95c2-4e89-b636-ba85159b5c28"). InnerVolumeSpecName "kube-api-access-qbjjx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.426851 4787 scope.go:117] "RemoveContainer" containerID="ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a"
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.448242 4787 scope.go:117] "RemoveContainer" containerID="6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac"
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.466905 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5de1b1c-95c2-4e89-b636-ba85159b5c28" (UID: "f5de1b1c-95c2-4e89-b636-ba85159b5c28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.473601 4787 scope.go:117] "RemoveContainer" containerID="6745316ce7cc6d91064925164573014841f300e99a9fc78b92d93b3c0ae10ebd"
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.488069 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lmxvh"
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.503547 4787 scope.go:117] "RemoveContainer" containerID="ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a"
Jan 26 20:08:17 crc kubenswrapper[4787]: E0126 20:08:17.504119 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a\": container with ID starting with ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a not found: ID does not exist" containerID="ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a"
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.504159 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a"} err="failed to get container status \"ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a\": rpc error: code = NotFound desc = could not find container \"ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a\": container with ID starting with ef877c78fed346b40f60d03dc22122a48446cb7c5e52c84f0c2444d22ea14d6a not found: ID does not exist"
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.504187 4787 scope.go:117] "RemoveContainer" containerID="6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac"
Jan 26 20:08:17 crc kubenswrapper[4787]: E0126 20:08:17.504511 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac\": container with ID starting with 6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac not found: ID does not exist" containerID="6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac"
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.504535 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac"} err="failed to get container status \"6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac\": rpc error: code = NotFound desc = could not find container \"6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac\": container with ID starting with 6d611127053e634b55c5112b0ceb9494bb5598bdb12987515dc404dd483a00ac not found: ID does not exist"
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.504552 4787 scope.go:117] "RemoveContainer" containerID="6745316ce7cc6d91064925164573014841f300e99a9fc78b92d93b3c0ae10ebd"
Jan 26 20:08:17 crc kubenswrapper[4787]: E0126 20:08:17.504864 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6745316ce7cc6d91064925164573014841f300e99a9fc78b92d93b3c0ae10ebd\": container with ID starting with 6745316ce7cc6d91064925164573014841f300e99a9fc78b92d93b3c0ae10ebd not found: ID does not exist" containerID="6745316ce7cc6d91064925164573014841f300e99a9fc78b92d93b3c0ae10ebd"
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.505090 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6745316ce7cc6d91064925164573014841f300e99a9fc78b92d93b3c0ae10ebd"} err="failed to get container status \"6745316ce7cc6d91064925164573014841f300e99a9fc78b92d93b3c0ae10ebd\": rpc error: code = NotFound desc = could not find container \"6745316ce7cc6d91064925164573014841f300e99a9fc78b92d93b3c0ae10ebd\": container with ID starting with 6745316ce7cc6d91064925164573014841f300e99a9fc78b92d93b3c0ae10ebd not found: ID does not exist"
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.513422 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.513455 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbjjx\" (UniqueName: \"kubernetes.io/projected/f5de1b1c-95c2-4e89-b636-ba85159b5c28-kube-api-access-qbjjx\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.513467 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5de1b1c-95c2-4e89-b636-ba85159b5c28-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.719496 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dbft9"]
Jan 26 20:08:17 crc kubenswrapper[4787]: I0126 20:08:17.729132 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dbft9"]
Jan 26 20:08:18 crc kubenswrapper[4787]: I0126 20:08:18.218271 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lmxvh"]
Jan 26 20:08:19 crc kubenswrapper[4787]: I0126 20:08:19.422222 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lmxvh" podUID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" containerName="registry-server" containerID="cri-o://1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338" gracePeriod=2
Jan 26 20:08:19 crc kubenswrapper[4787]: I0126 20:08:19.608656 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" path="/var/lib/kubelet/pods/f5de1b1c-95c2-4e89-b636-ba85159b5c28/volumes"
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.017076 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmxvh"
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.078995 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lfqw\" (UniqueName: \"kubernetes.io/projected/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-kube-api-access-8lfqw\") pod \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") "
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.079075 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-utilities\") pod \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") "
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.079232 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-catalog-content\") pod \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\" (UID: \"c41ae5d3-3c14-4825-8dbf-b4887b3809e2\") "
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.080877 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-utilities" (OuterVolumeSpecName: "utilities") pod "c41ae5d3-3c14-4825-8dbf-b4887b3809e2" (UID: "c41ae5d3-3c14-4825-8dbf-b4887b3809e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.090126 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-kube-api-access-8lfqw" (OuterVolumeSpecName: "kube-api-access-8lfqw") pod "c41ae5d3-3c14-4825-8dbf-b4887b3809e2" (UID: "c41ae5d3-3c14-4825-8dbf-b4887b3809e2"). InnerVolumeSpecName "kube-api-access-8lfqw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.189649 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lfqw\" (UniqueName: \"kubernetes.io/projected/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-kube-api-access-8lfqw\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.189692 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.361880 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c41ae5d3-3c14-4825-8dbf-b4887b3809e2" (UID: "c41ae5d3-3c14-4825-8dbf-b4887b3809e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.394720 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c41ae5d3-3c14-4825-8dbf-b4887b3809e2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.437417 4787 generic.go:334] "Generic (PLEG): container finished" podID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" containerID="1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338" exitCode=0
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.437474 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxvh" event={"ID":"c41ae5d3-3c14-4825-8dbf-b4887b3809e2","Type":"ContainerDied","Data":"1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338"}
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.437501 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmxvh" event={"ID":"c41ae5d3-3c14-4825-8dbf-b4887b3809e2","Type":"ContainerDied","Data":"4942955fc4ed7c27862c06699a704b22ea57a4f335ca190ff93a5fed7971c950"}
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.437517 4787 scope.go:117] "RemoveContainer" containerID="1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338"
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.437533 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmxvh"
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.467284 4787 scope.go:117] "RemoveContainer" containerID="328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64"
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.494179 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lmxvh"]
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.500808 4787 scope.go:117] "RemoveContainer" containerID="34b1834e6cfc69f88c30b3d7acb1ff7854557bbe321dcf85961eda7070e88c4f"
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.504843 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lmxvh"]
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.556156 4787 scope.go:117] "RemoveContainer" containerID="1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338"
Jan 26 20:08:20 crc kubenswrapper[4787]: E0126 20:08:20.560059 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338\": container with ID starting with 1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338 not found: ID does not exist" containerID="1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338"
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.560121 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338"} err="failed to get container status \"1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338\": rpc error: code = NotFound desc = could not find container \"1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338\": container with ID starting with 1c94ec1186e334595055e30afd2ea38a64996d1a03bd004f836043133b99b338 not found: ID does not exist"
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.560152 4787 scope.go:117] "RemoveContainer" containerID="328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64"
Jan 26 20:08:20 crc kubenswrapper[4787]: E0126 20:08:20.562471 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64\": container with ID starting with 328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64 not found: ID does not exist" containerID="328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64"
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.562529 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64"} err="failed to get container status \"328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64\": rpc error: code = NotFound desc = could not find container \"328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64\": container with ID starting with 328ec70660e1c40336a8d48d94e5ae7f5eeaf2aa09dfbb119c9213009c8a4e64 not found: ID does not exist"
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.562636 4787 scope.go:117] "RemoveContainer" containerID="34b1834e6cfc69f88c30b3d7acb1ff7854557bbe321dcf85961eda7070e88c4f"
Jan 26 20:08:20 crc kubenswrapper[4787]: E0126 20:08:20.563200 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b1834e6cfc69f88c30b3d7acb1ff7854557bbe321dcf85961eda7070e88c4f\": container with ID starting with 34b1834e6cfc69f88c30b3d7acb1ff7854557bbe321dcf85961eda7070e88c4f not found: ID does not exist" containerID="34b1834e6cfc69f88c30b3d7acb1ff7854557bbe321dcf85961eda7070e88c4f"
Jan 26 20:08:20 crc kubenswrapper[4787]: I0126 20:08:20.563232 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b1834e6cfc69f88c30b3d7acb1ff7854557bbe321dcf85961eda7070e88c4f"} err="failed to get container status \"34b1834e6cfc69f88c30b3d7acb1ff7854557bbe321dcf85961eda7070e88c4f\": rpc error: code = NotFound desc = could not find container \"34b1834e6cfc69f88c30b3d7acb1ff7854557bbe321dcf85961eda7070e88c4f\": container with ID starting with 34b1834e6cfc69f88c30b3d7acb1ff7854557bbe321dcf85961eda7070e88c4f not found: ID does not exist"
Jan 26 20:08:21 crc kubenswrapper[4787]: I0126 20:08:21.606169 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" path="/var/lib/kubelet/pods/c41ae5d3-3c14-4825-8dbf-b4887b3809e2/volumes"
Jan 26 20:08:47 crc kubenswrapper[4787]: I0126 20:08:47.763767 4787 generic.go:334] "Generic (PLEG): container finished" podID="c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" containerID="0c2af475f82595d8b06deed9972c8c476fdd35b9f81dca40af2236dd2683d3b1" exitCode=0
Jan 26 20:08:47 crc kubenswrapper[4787]: I0126 20:08:47.763996 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" event={"ID":"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca","Type":"ContainerDied","Data":"0c2af475f82595d8b06deed9972c8c476fdd35b9f81dca40af2236dd2683d3b1"}
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.249539 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s"
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.350234 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-0\") pod \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") "
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.350298 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-0\") pod \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") "
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.350387 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ssh-key-openstack-cell1\") pod \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") "
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.350452 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-1\") pod \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") "
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.350472 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vstpn\" (UniqueName: \"kubernetes.io/projected/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-kube-api-access-vstpn\") pod \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") "
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.350498 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-1\") pod \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") "
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.350544 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ceph\") pod \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") "
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.350589 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-1\") pod \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") "
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.350641 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-0\") pod \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") "
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.350658 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-combined-ca-bundle\") pod \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") "
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.350700 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-inventory\") pod \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\" (UID: \"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca\") "
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.356164 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-kube-api-access-vstpn" (OuterVolumeSpecName: "kube-api-access-vstpn") pod "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" (UID: "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca"). InnerVolumeSpecName "kube-api-access-vstpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.364140 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ceph" (OuterVolumeSpecName: "ceph") pod "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" (UID: "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.370275 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" (UID: "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.381258 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" (UID: "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.384736 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" (UID: "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.387018 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" (UID: "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.398103 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-inventory" (OuterVolumeSpecName: "inventory") pod "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" (UID: "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.400913 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" (UID: "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.406081 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" (UID: "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.408104 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" (UID: "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.428648 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" (UID: "c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.453127 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.453169 4787 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.453182 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.453196 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.453208 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vstpn\" (UniqueName: \"kubernetes.io/projected/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-kube-api-access-vstpn\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.453220 4787 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.453232 4787 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-ceph\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.453246 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.453258 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.453271 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.453284 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.791756 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s" event={"ID":"c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca","Type":"ContainerDied","Data":"28fb7dc11cf03dc18518754bff3563b160cddd2dc43a6e32966c3d25ec81e84a"}
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.792123 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28fb7dc11cf03dc18518754bff3563b160cddd2dc43a6e32966c3d25ec81e84a"
Jan 26 20:08:49 crc kubenswrapper[4787]: I0126 20:08:49.791864 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.391805 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t9pcg"]
Jan 26 20:09:58 crc kubenswrapper[4787]: E0126 20:09:58.432133 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" containerName="registry-server"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.432155 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" containerName="registry-server"
Jan 26 20:09:58 crc kubenswrapper[4787]: E0126 20:09:58.432180 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" containerName="extract-content"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.432189 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" containerName="extract-content"
Jan 26 20:09:58 crc kubenswrapper[4787]: E0126 20:09:58.432220 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" containerName="extract-utilities"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.432227 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" containerName="extract-utilities"
Jan 26 20:09:58 crc kubenswrapper[4787]: E0126 20:09:58.432250 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" containerName="extract-utilities"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.432258 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" containerName="extract-utilities"
Jan 26 20:09:58 crc kubenswrapper[4787]: E0126 20:09:58.432274 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" containerName="registry-server"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.432280 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" containerName="registry-server"
Jan 26 20:09:58 crc kubenswrapper[4787]: E0126 20:09:58.432306 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.432313 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Jan 26 20:09:58 crc kubenswrapper[4787]: E0126 20:09:58.432377 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" containerName="extract-content"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.432383 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" containerName="extract-content"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.432816 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41ae5d3-3c14-4825-8dbf-b4887b3809e2" containerName="registry-server"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.432839 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5de1b1c-95c2-4e89-b636-ba85159b5c28" containerName="registry-server"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.432863 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.436068 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9pcg"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.466988 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9pcg"]
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.501497 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-utilities\") pod \"redhat-marketplace-t9pcg\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " pod="openshift-marketplace/redhat-marketplace-t9pcg"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.501737 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-catalog-content\") pod \"redhat-marketplace-t9pcg\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " pod="openshift-marketplace/redhat-marketplace-t9pcg"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.501805 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54nbp\" (UniqueName: \"kubernetes.io/projected/38c8d39c-7ba5-4f41-b727-b37827d60a6f-kube-api-access-54nbp\") pod \"redhat-marketplace-t9pcg\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " pod="openshift-marketplace/redhat-marketplace-t9pcg"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.604313 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54nbp\" (UniqueName: \"kubernetes.io/projected/38c8d39c-7ba5-4f41-b727-b37827d60a6f-kube-api-access-54nbp\") pod \"redhat-marketplace-t9pcg\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " pod="openshift-marketplace/redhat-marketplace-t9pcg"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.604449 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-utilities\") pod \"redhat-marketplace-t9pcg\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " pod="openshift-marketplace/redhat-marketplace-t9pcg"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.604541 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-catalog-content\") pod \"redhat-marketplace-t9pcg\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " pod="openshift-marketplace/redhat-marketplace-t9pcg"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.605032 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-catalog-content\") pod \"redhat-marketplace-t9pcg\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " pod="openshift-marketplace/redhat-marketplace-t9pcg"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.605138 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-utilities\") pod \"redhat-marketplace-t9pcg\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " pod="openshift-marketplace/redhat-marketplace-t9pcg"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.624884 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54nbp\" (UniqueName: \"kubernetes.io/projected/38c8d39c-7ba5-4f41-b727-b37827d60a6f-kube-api-access-54nbp\") pod \"redhat-marketplace-t9pcg\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " pod="openshift-marketplace/redhat-marketplace-t9pcg"
Jan 26 20:09:58 crc kubenswrapper[4787]: I0126 20:09:58.771941 4787 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9pcg" Jan 26 20:09:59 crc kubenswrapper[4787]: I0126 20:09:59.283829 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9pcg"] Jan 26 20:09:59 crc kubenswrapper[4787]: I0126 20:09:59.607553 4787 generic.go:334] "Generic (PLEG): container finished" podID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" containerID="91fd6a36d55b36ae73f6618b2040fb61b1363330491beacda7c6047b12faf0c0" exitCode=0 Jan 26 20:09:59 crc kubenswrapper[4787]: I0126 20:09:59.607859 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9pcg" event={"ID":"38c8d39c-7ba5-4f41-b727-b37827d60a6f","Type":"ContainerDied","Data":"91fd6a36d55b36ae73f6618b2040fb61b1363330491beacda7c6047b12faf0c0"} Jan 26 20:09:59 crc kubenswrapper[4787]: I0126 20:09:59.607880 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9pcg" event={"ID":"38c8d39c-7ba5-4f41-b727-b37827d60a6f","Type":"ContainerStarted","Data":"d0f1d8d52fc12cc0c9033b4b62f0b72493572646d7c975fac111613a3122ada7"} Jan 26 20:10:00 crc kubenswrapper[4787]: I0126 20:10:00.621536 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9pcg" event={"ID":"38c8d39c-7ba5-4f41-b727-b37827d60a6f","Type":"ContainerStarted","Data":"227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239"} Jan 26 20:10:01 crc kubenswrapper[4787]: I0126 20:10:01.636252 4787 generic.go:334] "Generic (PLEG): container finished" podID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" containerID="227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239" exitCode=0 Jan 26 20:10:01 crc kubenswrapper[4787]: I0126 20:10:01.636346 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9pcg" 
event={"ID":"38c8d39c-7ba5-4f41-b727-b37827d60a6f","Type":"ContainerDied","Data":"227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239"} Jan 26 20:10:02 crc kubenswrapper[4787]: I0126 20:10:02.652469 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9pcg" event={"ID":"38c8d39c-7ba5-4f41-b727-b37827d60a6f","Type":"ContainerStarted","Data":"c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0"} Jan 26 20:10:02 crc kubenswrapper[4787]: I0126 20:10:02.674179 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t9pcg" podStartSLOduration=2.202568532 podStartE2EDuration="4.674148059s" podCreationTimestamp="2026-01-26 20:09:58 +0000 UTC" firstStartedPulling="2026-01-26 20:09:59.610166518 +0000 UTC m=+8768.317302651" lastFinishedPulling="2026-01-26 20:10:02.081746045 +0000 UTC m=+8770.788882178" observedRunningTime="2026-01-26 20:10:02.671228788 +0000 UTC m=+8771.378364961" watchObservedRunningTime="2026-01-26 20:10:02.674148059 +0000 UTC m=+8771.381284242" Jan 26 20:10:08 crc kubenswrapper[4787]: I0126 20:10:08.772512 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t9pcg" Jan 26 20:10:08 crc kubenswrapper[4787]: I0126 20:10:08.773217 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t9pcg" Jan 26 20:10:08 crc kubenswrapper[4787]: I0126 20:10:08.869544 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t9pcg" Jan 26 20:10:10 crc kubenswrapper[4787]: I0126 20:10:10.347607 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t9pcg" Jan 26 20:10:10 crc kubenswrapper[4787]: I0126 20:10:10.397461 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-t9pcg"] Jan 26 20:10:11 crc kubenswrapper[4787]: I0126 20:10:11.767048 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t9pcg" podUID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" containerName="registry-server" containerID="cri-o://c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0" gracePeriod=2 Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.391025 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9pcg" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.454602 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54nbp\" (UniqueName: \"kubernetes.io/projected/38c8d39c-7ba5-4f41-b727-b37827d60a6f-kube-api-access-54nbp\") pod \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.454933 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-utilities\") pod \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.455210 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-catalog-content\") pod \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\" (UID: \"38c8d39c-7ba5-4f41-b727-b37827d60a6f\") " Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.455926 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-utilities" (OuterVolumeSpecName: "utilities") pod "38c8d39c-7ba5-4f41-b727-b37827d60a6f" (UID: 
"38c8d39c-7ba5-4f41-b727-b37827d60a6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.471314 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c8d39c-7ba5-4f41-b727-b37827d60a6f-kube-api-access-54nbp" (OuterVolumeSpecName: "kube-api-access-54nbp") pod "38c8d39c-7ba5-4f41-b727-b37827d60a6f" (UID: "38c8d39c-7ba5-4f41-b727-b37827d60a6f"). InnerVolumeSpecName "kube-api-access-54nbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.491406 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38c8d39c-7ba5-4f41-b727-b37827d60a6f" (UID: "38c8d39c-7ba5-4f41-b727-b37827d60a6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.558109 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.558150 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54nbp\" (UniqueName: \"kubernetes.io/projected/38c8d39c-7ba5-4f41-b727-b37827d60a6f-kube-api-access-54nbp\") on node \"crc\" DevicePath \"\"" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.558166 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38c8d39c-7ba5-4f41-b727-b37827d60a6f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.787150 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" containerID="c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0" exitCode=0 Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.787229 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9pcg" event={"ID":"38c8d39c-7ba5-4f41-b727-b37827d60a6f","Type":"ContainerDied","Data":"c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0"} Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.787280 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9pcg" event={"ID":"38c8d39c-7ba5-4f41-b727-b37827d60a6f","Type":"ContainerDied","Data":"d0f1d8d52fc12cc0c9033b4b62f0b72493572646d7c975fac111613a3122ada7"} Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.787309 4787 scope.go:117] "RemoveContainer" containerID="c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.787403 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9pcg" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.840504 4787 scope.go:117] "RemoveContainer" containerID="227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.846590 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9pcg"] Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.862269 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9pcg"] Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.871761 4787 scope.go:117] "RemoveContainer" containerID="91fd6a36d55b36ae73f6618b2040fb61b1363330491beacda7c6047b12faf0c0" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.931116 4787 scope.go:117] "RemoveContainer" containerID="c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0" Jan 26 20:10:12 crc kubenswrapper[4787]: E0126 20:10:12.931875 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0\": container with ID starting with c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0 not found: ID does not exist" containerID="c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.931914 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0"} err="failed to get container status \"c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0\": rpc error: code = NotFound desc = could not find container \"c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0\": container with ID starting with c89593c2194f17775d53e599fed3513974994cc42ba73e3a0bbd40550bd4f7f0 not found: 
ID does not exist" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.931974 4787 scope.go:117] "RemoveContainer" containerID="227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239" Jan 26 20:10:12 crc kubenswrapper[4787]: E0126 20:10:12.932519 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239\": container with ID starting with 227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239 not found: ID does not exist" containerID="227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.932564 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239"} err="failed to get container status \"227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239\": rpc error: code = NotFound desc = could not find container \"227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239\": container with ID starting with 227333674f452a2b8161252e97905ffa0a93564cb10602c4d449ed051e0da239 not found: ID does not exist" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.932581 4787 scope.go:117] "RemoveContainer" containerID="91fd6a36d55b36ae73f6618b2040fb61b1363330491beacda7c6047b12faf0c0" Jan 26 20:10:12 crc kubenswrapper[4787]: E0126 20:10:12.933407 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91fd6a36d55b36ae73f6618b2040fb61b1363330491beacda7c6047b12faf0c0\": container with ID starting with 91fd6a36d55b36ae73f6618b2040fb61b1363330491beacda7c6047b12faf0c0 not found: ID does not exist" containerID="91fd6a36d55b36ae73f6618b2040fb61b1363330491beacda7c6047b12faf0c0" Jan 26 20:10:12 crc kubenswrapper[4787]: I0126 20:10:12.933453 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91fd6a36d55b36ae73f6618b2040fb61b1363330491beacda7c6047b12faf0c0"} err="failed to get container status \"91fd6a36d55b36ae73f6618b2040fb61b1363330491beacda7c6047b12faf0c0\": rpc error: code = NotFound desc = could not find container \"91fd6a36d55b36ae73f6618b2040fb61b1363330491beacda7c6047b12faf0c0\": container with ID starting with 91fd6a36d55b36ae73f6618b2040fb61b1363330491beacda7c6047b12faf0c0 not found: ID does not exist" Jan 26 20:10:13 crc kubenswrapper[4787]: I0126 20:10:13.611878 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" path="/var/lib/kubelet/pods/38c8d39c-7ba5-4f41-b727-b37827d60a6f/volumes" Jan 26 20:10:46 crc kubenswrapper[4787]: I0126 20:10:46.807269 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:10:46 crc kubenswrapper[4787]: I0126 20:10:46.808652 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:10:59 crc kubenswrapper[4787]: I0126 20:10:59.339541 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Jan 26 20:10:59 crc kubenswrapper[4787]: I0126 20:10:59.340329 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="1f03be67-34a7-411e-ae84-cbad607741f2" containerName="adoption" containerID="cri-o://b65fb13bc25f2784797256942b8c740673e6157c36ce4f622f404e9395a54d5a" gracePeriod=30 Jan 26 
20:11:16 crc kubenswrapper[4787]: I0126 20:11:16.807795 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:11:16 crc kubenswrapper[4787]: I0126 20:11:16.808549 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:11:29 crc kubenswrapper[4787]: I0126 20:11:29.792461 4787 generic.go:334] "Generic (PLEG): container finished" podID="1f03be67-34a7-411e-ae84-cbad607741f2" containerID="b65fb13bc25f2784797256942b8c740673e6157c36ce4f622f404e9395a54d5a" exitCode=137 Jan 26 20:11:29 crc kubenswrapper[4787]: I0126 20:11:29.792600 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1f03be67-34a7-411e-ae84-cbad607741f2","Type":"ContainerDied","Data":"b65fb13bc25f2784797256942b8c740673e6157c36ce4f622f404e9395a54d5a"} Jan 26 20:11:29 crc kubenswrapper[4787]: I0126 20:11:29.951348 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.077865 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb489be6-d81b-4825-998f-703f00435dc0\") pod \"1f03be67-34a7-411e-ae84-cbad607741f2\" (UID: \"1f03be67-34a7-411e-ae84-cbad607741f2\") " Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.078158 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9pwv\" (UniqueName: \"kubernetes.io/projected/1f03be67-34a7-411e-ae84-cbad607741f2-kube-api-access-n9pwv\") pod \"1f03be67-34a7-411e-ae84-cbad607741f2\" (UID: \"1f03be67-34a7-411e-ae84-cbad607741f2\") " Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.086976 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f03be67-34a7-411e-ae84-cbad607741f2-kube-api-access-n9pwv" (OuterVolumeSpecName: "kube-api-access-n9pwv") pod "1f03be67-34a7-411e-ae84-cbad607741f2" (UID: "1f03be67-34a7-411e-ae84-cbad607741f2"). InnerVolumeSpecName "kube-api-access-n9pwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.100528 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb489be6-d81b-4825-998f-703f00435dc0" (OuterVolumeSpecName: "mariadb-data") pod "1f03be67-34a7-411e-ae84-cbad607741f2" (UID: "1f03be67-34a7-411e-ae84-cbad607741f2"). InnerVolumeSpecName "pvc-fb489be6-d81b-4825-998f-703f00435dc0". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.182447 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9pwv\" (UniqueName: \"kubernetes.io/projected/1f03be67-34a7-411e-ae84-cbad607741f2-kube-api-access-n9pwv\") on node \"crc\" DevicePath \"\"" Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.182509 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-fb489be6-d81b-4825-998f-703f00435dc0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb489be6-d81b-4825-998f-703f00435dc0\") on node \"crc\" " Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.217412 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.217582 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-fb489be6-d81b-4825-998f-703f00435dc0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb489be6-d81b-4825-998f-703f00435dc0") on node "crc" Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.284367 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-fb489be6-d81b-4825-998f-703f00435dc0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb489be6-d81b-4825-998f-703f00435dc0\") on node \"crc\" DevicePath \"\"" Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.806148 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1f03be67-34a7-411e-ae84-cbad607741f2","Type":"ContainerDied","Data":"a3f80a9ab8f2599b2442fb0227e8ae606b6f6e85e6e5131122029fce07dc5d62"} Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.806196 4787 scope.go:117] "RemoveContainer" containerID="b65fb13bc25f2784797256942b8c740673e6157c36ce4f622f404e9395a54d5a" Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.806225 
4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.842042 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Jan 26 20:11:30 crc kubenswrapper[4787]: I0126 20:11:30.850450 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Jan 26 20:11:31 crc kubenswrapper[4787]: I0126 20:11:31.605397 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f03be67-34a7-411e-ae84-cbad607741f2" path="/var/lib/kubelet/pods/1f03be67-34a7-411e-ae84-cbad607741f2/volumes" Jan 26 20:11:31 crc kubenswrapper[4787]: I0126 20:11:31.699763 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Jan 26 20:11:31 crc kubenswrapper[4787]: I0126 20:11:31.701243 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="ba7afaee-0b36-4d28-b0e7-929cb87aac6c" containerName="adoption" containerID="cri-o://d84935215f791bb70771441dd72a2579d55f82085e4fa44a39d970aaae5c6c38" gracePeriod=30 Jan 26 20:11:46 crc kubenswrapper[4787]: I0126 20:11:46.814149 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:11:46 crc kubenswrapper[4787]: I0126 20:11:46.815028 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:11:46 crc kubenswrapper[4787]: I0126 20:11:46.815093 4787 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 20:11:46 crc kubenswrapper[4787]: I0126 20:11:46.816229 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:11:46 crc kubenswrapper[4787]: I0126 20:11:46.816310 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" gracePeriod=600 Jan 26 20:11:46 crc kubenswrapper[4787]: E0126 20:11:46.965537 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:11:47 crc kubenswrapper[4787]: I0126 20:11:47.031992 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" exitCode=0 Jan 26 20:11:47 crc kubenswrapper[4787]: I0126 20:11:47.032038 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" 
event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb"} Jan 26 20:11:47 crc kubenswrapper[4787]: I0126 20:11:47.032072 4787 scope.go:117] "RemoveContainer" containerID="1f6043ca920f870f78f18f8fcbd8dba77af8b649dcc965e5893af257769da443" Jan 26 20:11:47 crc kubenswrapper[4787]: I0126 20:11:47.032810 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:11:47 crc kubenswrapper[4787]: E0126 20:11:47.033123 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:12:01 crc kubenswrapper[4787]: I0126 20:12:01.595459 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:12:01 crc kubenswrapper[4787]: E0126 20:12:01.596403 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.216936 4787 generic.go:334] "Generic (PLEG): container finished" podID="ba7afaee-0b36-4d28-b0e7-929cb87aac6c" containerID="d84935215f791bb70771441dd72a2579d55f82085e4fa44a39d970aaae5c6c38" exitCode=137 Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.217074 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"ba7afaee-0b36-4d28-b0e7-929cb87aac6c","Type":"ContainerDied","Data":"d84935215f791bb70771441dd72a2579d55f82085e4fa44a39d970aaae5c6c38"} Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.217242 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"ba7afaee-0b36-4d28-b0e7-929cb87aac6c","Type":"ContainerDied","Data":"c27da389f3fe92c5abf74c9f7867316741bfa92078af85b0a914d16b2072422f"} Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.217255 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c27da389f3fe92c5abf74c9f7867316741bfa92078af85b0a914d16b2072422f" Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.260737 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.419713 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-ovn-data-cert\") pod \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.419897 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97l64\" (UniqueName: \"kubernetes.io/projected/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-kube-api-access-97l64\") pod \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.421162 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41145ebd-8c73-445c-b980-b07ef122220f\") pod \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\" (UID: \"ba7afaee-0b36-4d28-b0e7-929cb87aac6c\") " 
Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.427883 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "ba7afaee-0b36-4d28-b0e7-929cb87aac6c" (UID: "ba7afaee-0b36-4d28-b0e7-929cb87aac6c"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.429107 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-kube-api-access-97l64" (OuterVolumeSpecName: "kube-api-access-97l64") pod "ba7afaee-0b36-4d28-b0e7-929cb87aac6c" (UID: "ba7afaee-0b36-4d28-b0e7-929cb87aac6c"). InnerVolumeSpecName "kube-api-access-97l64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.447203 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41145ebd-8c73-445c-b980-b07ef122220f" (OuterVolumeSpecName: "ovn-data") pod "ba7afaee-0b36-4d28-b0e7-929cb87aac6c" (UID: "ba7afaee-0b36-4d28-b0e7-929cb87aac6c"). InnerVolumeSpecName "pvc-41145ebd-8c73-445c-b980-b07ef122220f". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.524839 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97l64\" (UniqueName: \"kubernetes.io/projected/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-kube-api-access-97l64\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.525042 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-41145ebd-8c73-445c-b980-b07ef122220f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41145ebd-8c73-445c-b980-b07ef122220f\") on node \"crc\" " Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.527186 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/ba7afaee-0b36-4d28-b0e7-929cb87aac6c-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.568199 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.568383 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-41145ebd-8c73-445c-b980-b07ef122220f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41145ebd-8c73-445c-b980-b07ef122220f") on node "crc" Jan 26 20:12:02 crc kubenswrapper[4787]: I0126 20:12:02.629341 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-41145ebd-8c73-445c-b980-b07ef122220f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41145ebd-8c73-445c-b980-b07ef122220f\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:03 crc kubenswrapper[4787]: I0126 20:12:03.228386 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 26 20:12:03 crc kubenswrapper[4787]: I0126 20:12:03.284852 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Jan 26 20:12:03 crc kubenswrapper[4787]: I0126 20:12:03.297478 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Jan 26 20:12:03 crc kubenswrapper[4787]: I0126 20:12:03.604071 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba7afaee-0b36-4d28-b0e7-929cb87aac6c" path="/var/lib/kubelet/pods/ba7afaee-0b36-4d28-b0e7-929cb87aac6c/volumes" Jan 26 20:12:05 crc kubenswrapper[4787]: I0126 20:12:05.041389 4787 scope.go:117] "RemoveContainer" containerID="d84935215f791bb70771441dd72a2579d55f82085e4fa44a39d970aaae5c6c38" Jan 26 20:12:13 crc kubenswrapper[4787]: I0126 20:12:13.590184 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:12:13 crc kubenswrapper[4787]: E0126 20:12:13.590938 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:12:25 crc kubenswrapper[4787]: I0126 20:12:25.590125 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:12:25 crc kubenswrapper[4787]: E0126 20:12:25.591154 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:12:36 crc kubenswrapper[4787]: I0126 20:12:36.590070 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:12:36 crc kubenswrapper[4787]: E0126 20:12:36.591302 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:12:48 crc kubenswrapper[4787]: I0126 20:12:48.589573 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:12:48 crc kubenswrapper[4787]: E0126 20:12:48.590529 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:13:02 crc kubenswrapper[4787]: I0126 20:13:02.590021 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:13:02 crc kubenswrapper[4787]: E0126 20:13:02.590887 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.390900 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pj5x9/must-gather-h48jx"] Jan 26 20:13:10 crc kubenswrapper[4787]: E0126 20:13:10.393245 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7afaee-0b36-4d28-b0e7-929cb87aac6c" containerName="adoption" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.393364 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7afaee-0b36-4d28-b0e7-929cb87aac6c" containerName="adoption" Jan 26 20:13:10 crc kubenswrapper[4787]: E0126 20:13:10.393456 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f03be67-34a7-411e-ae84-cbad607741f2" containerName="adoption" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.393523 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f03be67-34a7-411e-ae84-cbad607741f2" containerName="adoption" Jan 26 20:13:10 crc kubenswrapper[4787]: E0126 20:13:10.393602 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" containerName="extract-content" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.393784 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" containerName="extract-content" Jan 26 20:13:10 crc kubenswrapper[4787]: E0126 20:13:10.394000 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" containerName="extract-utilities" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.394088 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" containerName="extract-utilities" Jan 26 
20:13:10 crc kubenswrapper[4787]: E0126 20:13:10.394193 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" containerName="registry-server" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.394274 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" containerName="registry-server" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.394639 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f03be67-34a7-411e-ae84-cbad607741f2" containerName="adoption" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.394753 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c8d39c-7ba5-4f41-b727-b37827d60a6f" containerName="registry-server" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.394860 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7afaee-0b36-4d28-b0e7-929cb87aac6c" containerName="adoption" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.396713 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pj5x9/must-gather-h48jx" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.399813 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pj5x9"/"kube-root-ca.crt" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.399838 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pj5x9"/"openshift-service-ca.crt" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.410127 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pj5x9/must-gather-h48jx"] Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.562869 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw78j\" (UniqueName: \"kubernetes.io/projected/5531bf5d-fded-4e95-836c-42bc930460b7-kube-api-access-rw78j\") pod \"must-gather-h48jx\" (UID: \"5531bf5d-fded-4e95-836c-42bc930460b7\") " pod="openshift-must-gather-pj5x9/must-gather-h48jx" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.563145 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5531bf5d-fded-4e95-836c-42bc930460b7-must-gather-output\") pod \"must-gather-h48jx\" (UID: \"5531bf5d-fded-4e95-836c-42bc930460b7\") " pod="openshift-must-gather-pj5x9/must-gather-h48jx" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.665222 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5531bf5d-fded-4e95-836c-42bc930460b7-must-gather-output\") pod \"must-gather-h48jx\" (UID: \"5531bf5d-fded-4e95-836c-42bc930460b7\") " pod="openshift-must-gather-pj5x9/must-gather-h48jx" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.665331 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rw78j\" (UniqueName: \"kubernetes.io/projected/5531bf5d-fded-4e95-836c-42bc930460b7-kube-api-access-rw78j\") pod \"must-gather-h48jx\" (UID: \"5531bf5d-fded-4e95-836c-42bc930460b7\") " pod="openshift-must-gather-pj5x9/must-gather-h48jx" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.665917 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5531bf5d-fded-4e95-836c-42bc930460b7-must-gather-output\") pod \"must-gather-h48jx\" (UID: \"5531bf5d-fded-4e95-836c-42bc930460b7\") " pod="openshift-must-gather-pj5x9/must-gather-h48jx" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.693930 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw78j\" (UniqueName: \"kubernetes.io/projected/5531bf5d-fded-4e95-836c-42bc930460b7-kube-api-access-rw78j\") pod \"must-gather-h48jx\" (UID: \"5531bf5d-fded-4e95-836c-42bc930460b7\") " pod="openshift-must-gather-pj5x9/must-gather-h48jx" Jan 26 20:13:10 crc kubenswrapper[4787]: I0126 20:13:10.718903 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pj5x9/must-gather-h48jx" Jan 26 20:13:11 crc kubenswrapper[4787]: I0126 20:13:11.275936 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pj5x9/must-gather-h48jx"] Jan 26 20:13:11 crc kubenswrapper[4787]: I0126 20:13:11.278610 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:13:12 crc kubenswrapper[4787]: I0126 20:13:12.078278 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pj5x9/must-gather-h48jx" event={"ID":"5531bf5d-fded-4e95-836c-42bc930460b7","Type":"ContainerStarted","Data":"3582d7959f7f824024a09a33780447eaeca4d6663e916cc0fe25f23130b98dc3"} Jan 26 20:13:14 crc kubenswrapper[4787]: I0126 20:13:14.589609 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:13:14 crc kubenswrapper[4787]: E0126 20:13:14.590622 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:13:19 crc kubenswrapper[4787]: I0126 20:13:19.170505 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pj5x9/must-gather-h48jx" event={"ID":"5531bf5d-fded-4e95-836c-42bc930460b7","Type":"ContainerStarted","Data":"493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403"} Jan 26 20:13:19 crc kubenswrapper[4787]: I0126 20:13:19.171119 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pj5x9/must-gather-h48jx" 
event={"ID":"5531bf5d-fded-4e95-836c-42bc930460b7","Type":"ContainerStarted","Data":"57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44"} Jan 26 20:13:19 crc kubenswrapper[4787]: I0126 20:13:19.220127 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pj5x9/must-gather-h48jx" podStartSLOduration=2.299628047 podStartE2EDuration="9.220094613s" podCreationTimestamp="2026-01-26 20:13:10 +0000 UTC" firstStartedPulling="2026-01-26 20:13:11.278565487 +0000 UTC m=+8959.985701620" lastFinishedPulling="2026-01-26 20:13:18.199032053 +0000 UTC m=+8966.906168186" observedRunningTime="2026-01-26 20:13:19.197408667 +0000 UTC m=+8967.904544820" watchObservedRunningTime="2026-01-26 20:13:19.220094613 +0000 UTC m=+8967.927230806" Jan 26 20:13:22 crc kubenswrapper[4787]: I0126 20:13:22.910165 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pj5x9/crc-debug-fdsg7"] Jan 26 20:13:22 crc kubenswrapper[4787]: I0126 20:13:22.912400 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" Jan 26 20:13:22 crc kubenswrapper[4787]: I0126 20:13:22.915149 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pj5x9"/"default-dockercfg-fn6hz" Jan 26 20:13:23 crc kubenswrapper[4787]: I0126 20:13:23.010588 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljq5z\" (UniqueName: \"kubernetes.io/projected/e9f7de7a-854f-4a5b-b831-0e073f8a2675-kube-api-access-ljq5z\") pod \"crc-debug-fdsg7\" (UID: \"e9f7de7a-854f-4a5b-b831-0e073f8a2675\") " pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" Jan 26 20:13:23 crc kubenswrapper[4787]: I0126 20:13:23.010688 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9f7de7a-854f-4a5b-b831-0e073f8a2675-host\") pod \"crc-debug-fdsg7\" (UID: \"e9f7de7a-854f-4a5b-b831-0e073f8a2675\") " pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" Jan 26 20:13:23 crc kubenswrapper[4787]: I0126 20:13:23.111319 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljq5z\" (UniqueName: \"kubernetes.io/projected/e9f7de7a-854f-4a5b-b831-0e073f8a2675-kube-api-access-ljq5z\") pod \"crc-debug-fdsg7\" (UID: \"e9f7de7a-854f-4a5b-b831-0e073f8a2675\") " pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" Jan 26 20:13:23 crc kubenswrapper[4787]: I0126 20:13:23.111374 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9f7de7a-854f-4a5b-b831-0e073f8a2675-host\") pod \"crc-debug-fdsg7\" (UID: \"e9f7de7a-854f-4a5b-b831-0e073f8a2675\") " pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" Jan 26 20:13:23 crc kubenswrapper[4787]: I0126 20:13:23.111538 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/e9f7de7a-854f-4a5b-b831-0e073f8a2675-host\") pod \"crc-debug-fdsg7\" (UID: \"e9f7de7a-854f-4a5b-b831-0e073f8a2675\") " pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" Jan 26 20:13:23 crc kubenswrapper[4787]: I0126 20:13:23.146623 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljq5z\" (UniqueName: \"kubernetes.io/projected/e9f7de7a-854f-4a5b-b831-0e073f8a2675-kube-api-access-ljq5z\") pod \"crc-debug-fdsg7\" (UID: \"e9f7de7a-854f-4a5b-b831-0e073f8a2675\") " pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" Jan 26 20:13:23 crc kubenswrapper[4787]: I0126 20:13:23.230229 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" Jan 26 20:13:23 crc kubenswrapper[4787]: W0126 20:13:23.278572 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9f7de7a_854f_4a5b_b831_0e073f8a2675.slice/crio-9dc1b1cdcf60fe3ceeb4de911f1c93c011cf5146a8df78d2163239982eb68934 WatchSource:0}: Error finding container 9dc1b1cdcf60fe3ceeb4de911f1c93c011cf5146a8df78d2163239982eb68934: Status 404 returned error can't find the container with id 9dc1b1cdcf60fe3ceeb4de911f1c93c011cf5146a8df78d2163239982eb68934 Jan 26 20:13:24 crc kubenswrapper[4787]: I0126 20:13:24.250077 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" event={"ID":"e9f7de7a-854f-4a5b-b831-0e073f8a2675","Type":"ContainerStarted","Data":"9dc1b1cdcf60fe3ceeb4de911f1c93c011cf5146a8df78d2163239982eb68934"} Jan 26 20:13:24 crc kubenswrapper[4787]: E0126 20:13:24.258860 4787 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.69:42264->38.102.83.69:41761: read tcp 38.102.83.69:42264->38.102.83.69:41761: read: connection reset by peer Jan 26 20:13:27 crc kubenswrapper[4787]: I0126 20:13:27.589645 4787 scope.go:117] "RemoveContainer" 
containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:13:27 crc kubenswrapper[4787]: E0126 20:13:27.590329 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:13:35 crc kubenswrapper[4787]: I0126 20:13:35.373878 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" event={"ID":"e9f7de7a-854f-4a5b-b831-0e073f8a2675","Type":"ContainerStarted","Data":"0b68f2a95b2245e619d4c5f1fcb27869af105daace29f9ad5ce40fbfc5c015c4"} Jan 26 20:13:35 crc kubenswrapper[4787]: I0126 20:13:35.395539 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" podStartSLOduration=1.82469767 podStartE2EDuration="13.395516997s" podCreationTimestamp="2026-01-26 20:13:22 +0000 UTC" firstStartedPulling="2026-01-26 20:13:23.284791955 +0000 UTC m=+8971.991928088" lastFinishedPulling="2026-01-26 20:13:34.855611262 +0000 UTC m=+8983.562747415" observedRunningTime="2026-01-26 20:13:35.38503089 +0000 UTC m=+8984.092167043" watchObservedRunningTime="2026-01-26 20:13:35.395516997 +0000 UTC m=+8984.102653140" Jan 26 20:13:40 crc kubenswrapper[4787]: I0126 20:13:40.590463 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:13:40 crc kubenswrapper[4787]: E0126 20:13:40.591903 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:13:52 crc kubenswrapper[4787]: I0126 20:13:52.589472 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:13:52 crc kubenswrapper[4787]: E0126 20:13:52.590414 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:13:56 crc kubenswrapper[4787]: I0126 20:13:56.560034 4787 generic.go:334] "Generic (PLEG): container finished" podID="e9f7de7a-854f-4a5b-b831-0e073f8a2675" containerID="0b68f2a95b2245e619d4c5f1fcb27869af105daace29f9ad5ce40fbfc5c015c4" exitCode=0 Jan 26 20:13:56 crc kubenswrapper[4787]: I0126 20:13:56.560113 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" event={"ID":"e9f7de7a-854f-4a5b-b831-0e073f8a2675","Type":"ContainerDied","Data":"0b68f2a95b2245e619d4c5f1fcb27869af105daace29f9ad5ce40fbfc5c015c4"} Jan 26 20:13:57 crc kubenswrapper[4787]: I0126 20:13:57.729138 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" Jan 26 20:13:57 crc kubenswrapper[4787]: I0126 20:13:57.746938 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9f7de7a-854f-4a5b-b831-0e073f8a2675-host\") pod \"e9f7de7a-854f-4a5b-b831-0e073f8a2675\" (UID: \"e9f7de7a-854f-4a5b-b831-0e073f8a2675\") " Jan 26 20:13:57 crc kubenswrapper[4787]: I0126 20:13:57.747075 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljq5z\" (UniqueName: \"kubernetes.io/projected/e9f7de7a-854f-4a5b-b831-0e073f8a2675-kube-api-access-ljq5z\") pod \"e9f7de7a-854f-4a5b-b831-0e073f8a2675\" (UID: \"e9f7de7a-854f-4a5b-b831-0e073f8a2675\") " Jan 26 20:13:57 crc kubenswrapper[4787]: I0126 20:13:57.747320 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9f7de7a-854f-4a5b-b831-0e073f8a2675-host" (OuterVolumeSpecName: "host") pod "e9f7de7a-854f-4a5b-b831-0e073f8a2675" (UID: "e9f7de7a-854f-4a5b-b831-0e073f8a2675"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:13:57 crc kubenswrapper[4787]: I0126 20:13:57.765312 4787 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9f7de7a-854f-4a5b-b831-0e073f8a2675-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:13:57 crc kubenswrapper[4787]: I0126 20:13:57.778088 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f7de7a-854f-4a5b-b831-0e073f8a2675-kube-api-access-ljq5z" (OuterVolumeSpecName: "kube-api-access-ljq5z") pod "e9f7de7a-854f-4a5b-b831-0e073f8a2675" (UID: "e9f7de7a-854f-4a5b-b831-0e073f8a2675"). InnerVolumeSpecName "kube-api-access-ljq5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:13:57 crc kubenswrapper[4787]: I0126 20:13:57.809279 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pj5x9/crc-debug-fdsg7"] Jan 26 20:13:57 crc kubenswrapper[4787]: I0126 20:13:57.825061 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pj5x9/crc-debug-fdsg7"] Jan 26 20:13:57 crc kubenswrapper[4787]: I0126 20:13:57.869868 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljq5z\" (UniqueName: \"kubernetes.io/projected/e9f7de7a-854f-4a5b-b831-0e073f8a2675-kube-api-access-ljq5z\") on node \"crc\" DevicePath \"\"" Jan 26 20:13:58 crc kubenswrapper[4787]: I0126 20:13:58.581886 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dc1b1cdcf60fe3ceeb4de911f1c93c011cf5146a8df78d2163239982eb68934" Jan 26 20:13:58 crc kubenswrapper[4787]: I0126 20:13:58.582020 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pj5x9/crc-debug-fdsg7" Jan 26 20:13:58 crc kubenswrapper[4787]: I0126 20:13:58.982808 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pj5x9/crc-debug-vgtcw"] Jan 26 20:13:58 crc kubenswrapper[4787]: E0126 20:13:58.983358 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f7de7a-854f-4a5b-b831-0e073f8a2675" containerName="container-00" Jan 26 20:13:58 crc kubenswrapper[4787]: I0126 20:13:58.983373 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f7de7a-854f-4a5b-b831-0e073f8a2675" containerName="container-00" Jan 26 20:13:58 crc kubenswrapper[4787]: I0126 20:13:58.983633 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f7de7a-854f-4a5b-b831-0e073f8a2675" containerName="container-00" Jan 26 20:13:58 crc kubenswrapper[4787]: I0126 20:13:58.984603 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" Jan 26 20:13:58 crc kubenswrapper[4787]: I0126 20:13:58.988391 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pj5x9"/"default-dockercfg-fn6hz" Jan 26 20:13:59 crc kubenswrapper[4787]: I0126 20:13:59.095416 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0380db30-8b2c-4a36-9d00-14a6057b451e-host\") pod \"crc-debug-vgtcw\" (UID: \"0380db30-8b2c-4a36-9d00-14a6057b451e\") " pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" Jan 26 20:13:59 crc kubenswrapper[4787]: I0126 20:13:59.095621 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhg2k\" (UniqueName: \"kubernetes.io/projected/0380db30-8b2c-4a36-9d00-14a6057b451e-kube-api-access-dhg2k\") pod \"crc-debug-vgtcw\" (UID: \"0380db30-8b2c-4a36-9d00-14a6057b451e\") " pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" Jan 26 20:13:59 crc kubenswrapper[4787]: I0126 20:13:59.197527 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhg2k\" (UniqueName: \"kubernetes.io/projected/0380db30-8b2c-4a36-9d00-14a6057b451e-kube-api-access-dhg2k\") pod \"crc-debug-vgtcw\" (UID: \"0380db30-8b2c-4a36-9d00-14a6057b451e\") " pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" Jan 26 20:13:59 crc kubenswrapper[4787]: I0126 20:13:59.197607 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0380db30-8b2c-4a36-9d00-14a6057b451e-host\") pod \"crc-debug-vgtcw\" (UID: \"0380db30-8b2c-4a36-9d00-14a6057b451e\") " pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" Jan 26 20:13:59 crc kubenswrapper[4787]: I0126 20:13:59.197739 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/0380db30-8b2c-4a36-9d00-14a6057b451e-host\") pod \"crc-debug-vgtcw\" (UID: \"0380db30-8b2c-4a36-9d00-14a6057b451e\") " pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" Jan 26 20:13:59 crc kubenswrapper[4787]: I0126 20:13:59.216686 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhg2k\" (UniqueName: \"kubernetes.io/projected/0380db30-8b2c-4a36-9d00-14a6057b451e-kube-api-access-dhg2k\") pod \"crc-debug-vgtcw\" (UID: \"0380db30-8b2c-4a36-9d00-14a6057b451e\") " pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" Jan 26 20:13:59 crc kubenswrapper[4787]: I0126 20:13:59.311767 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" Jan 26 20:13:59 crc kubenswrapper[4787]: I0126 20:13:59.605093 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f7de7a-854f-4a5b-b831-0e073f8a2675" path="/var/lib/kubelet/pods/e9f7de7a-854f-4a5b-b831-0e073f8a2675/volumes" Jan 26 20:13:59 crc kubenswrapper[4787]: I0126 20:13:59.605706 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" event={"ID":"0380db30-8b2c-4a36-9d00-14a6057b451e","Type":"ContainerStarted","Data":"236c93aacf7e2829b6cff336cb74bd169220cdb44d0f743233ff4252e0cec99d"} Jan 26 20:13:59 crc kubenswrapper[4787]: I0126 20:13:59.605733 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" event={"ID":"0380db30-8b2c-4a36-9d00-14a6057b451e","Type":"ContainerStarted","Data":"e894a5c4fa1508ed881d26a0c9b45ac8f17b0420e032dca91421fc027892e60d"} Jan 26 20:13:59 crc kubenswrapper[4787]: I0126 20:13:59.623317 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" podStartSLOduration=1.623298028 podStartE2EDuration="1.623298028s" podCreationTimestamp="2026-01-26 20:13:58 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:13:59.618137972 +0000 UTC m=+9008.325274105" watchObservedRunningTime="2026-01-26 20:13:59.623298028 +0000 UTC m=+9008.330434161" Jan 26 20:14:00 crc kubenswrapper[4787]: I0126 20:14:00.605002 4787 generic.go:334] "Generic (PLEG): container finished" podID="0380db30-8b2c-4a36-9d00-14a6057b451e" containerID="236c93aacf7e2829b6cff336cb74bd169220cdb44d0f743233ff4252e0cec99d" exitCode=1 Jan 26 20:14:00 crc kubenswrapper[4787]: I0126 20:14:00.605092 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" event={"ID":"0380db30-8b2c-4a36-9d00-14a6057b451e","Type":"ContainerDied","Data":"236c93aacf7e2829b6cff336cb74bd169220cdb44d0f743233ff4252e0cec99d"} Jan 26 20:14:01 crc kubenswrapper[4787]: I0126 20:14:01.912434 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" Jan 26 20:14:01 crc kubenswrapper[4787]: I0126 20:14:01.948250 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pj5x9/crc-debug-vgtcw"] Jan 26 20:14:01 crc kubenswrapper[4787]: I0126 20:14:01.956514 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pj5x9/crc-debug-vgtcw"] Jan 26 20:14:01 crc kubenswrapper[4787]: I0126 20:14:01.963543 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0380db30-8b2c-4a36-9d00-14a6057b451e-host\") pod \"0380db30-8b2c-4a36-9d00-14a6057b451e\" (UID: \"0380db30-8b2c-4a36-9d00-14a6057b451e\") " Jan 26 20:14:01 crc kubenswrapper[4787]: I0126 20:14:01.963646 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhg2k\" (UniqueName: \"kubernetes.io/projected/0380db30-8b2c-4a36-9d00-14a6057b451e-kube-api-access-dhg2k\") pod \"0380db30-8b2c-4a36-9d00-14a6057b451e\" 
(UID: \"0380db30-8b2c-4a36-9d00-14a6057b451e\") " Jan 26 20:14:01 crc kubenswrapper[4787]: I0126 20:14:01.963665 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0380db30-8b2c-4a36-9d00-14a6057b451e-host" (OuterVolumeSpecName: "host") pod "0380db30-8b2c-4a36-9d00-14a6057b451e" (UID: "0380db30-8b2c-4a36-9d00-14a6057b451e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:14:01 crc kubenswrapper[4787]: I0126 20:14:01.964207 4787 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0380db30-8b2c-4a36-9d00-14a6057b451e-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:14:01 crc kubenswrapper[4787]: I0126 20:14:01.976159 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0380db30-8b2c-4a36-9d00-14a6057b451e-kube-api-access-dhg2k" (OuterVolumeSpecName: "kube-api-access-dhg2k") pod "0380db30-8b2c-4a36-9d00-14a6057b451e" (UID: "0380db30-8b2c-4a36-9d00-14a6057b451e"). InnerVolumeSpecName "kube-api-access-dhg2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:14:02 crc kubenswrapper[4787]: I0126 20:14:02.065687 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhg2k\" (UniqueName: \"kubernetes.io/projected/0380db30-8b2c-4a36-9d00-14a6057b451e-kube-api-access-dhg2k\") on node \"crc\" DevicePath \"\"" Jan 26 20:14:02 crc kubenswrapper[4787]: I0126 20:14:02.630022 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e894a5c4fa1508ed881d26a0c9b45ac8f17b0420e032dca91421fc027892e60d" Jan 26 20:14:02 crc kubenswrapper[4787]: I0126 20:14:02.630176 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pj5x9/crc-debug-vgtcw" Jan 26 20:14:03 crc kubenswrapper[4787]: I0126 20:14:03.604662 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0380db30-8b2c-4a36-9d00-14a6057b451e" path="/var/lib/kubelet/pods/0380db30-8b2c-4a36-9d00-14a6057b451e/volumes" Jan 26 20:14:07 crc kubenswrapper[4787]: I0126 20:14:07.590592 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:14:07 crc kubenswrapper[4787]: E0126 20:14:07.591332 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:14:20 crc kubenswrapper[4787]: I0126 20:14:20.589371 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:14:20 crc kubenswrapper[4787]: E0126 20:14:20.590158 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:14:35 crc kubenswrapper[4787]: I0126 20:14:35.590756 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:14:35 crc kubenswrapper[4787]: E0126 20:14:35.592109 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:14:46 crc kubenswrapper[4787]: I0126 20:14:46.588919 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:14:46 crc kubenswrapper[4787]: E0126 20:14:46.589804 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.172403 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48"] Jan 26 20:15:00 crc kubenswrapper[4787]: E0126 20:15:00.186632 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0380db30-8b2c-4a36-9d00-14a6057b451e" containerName="container-00" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.186688 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0380db30-8b2c-4a36-9d00-14a6057b451e" containerName="container-00" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.187785 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0380db30-8b2c-4a36-9d00-14a6057b451e" containerName="container-00" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.190088 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.193468 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.193703 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.208177 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48"] Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.349071 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q6bk\" (UniqueName: \"kubernetes.io/projected/7295ea12-fb51-4ccc-b56b-f811eb29160e-kube-api-access-4q6bk\") pod \"collect-profiles-29490975-sdq48\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.349155 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7295ea12-fb51-4ccc-b56b-f811eb29160e-secret-volume\") pod \"collect-profiles-29490975-sdq48\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.349217 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7295ea12-fb51-4ccc-b56b-f811eb29160e-config-volume\") pod \"collect-profiles-29490975-sdq48\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.451232 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q6bk\" (UniqueName: \"kubernetes.io/projected/7295ea12-fb51-4ccc-b56b-f811eb29160e-kube-api-access-4q6bk\") pod \"collect-profiles-29490975-sdq48\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.452375 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7295ea12-fb51-4ccc-b56b-f811eb29160e-secret-volume\") pod \"collect-profiles-29490975-sdq48\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.452575 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7295ea12-fb51-4ccc-b56b-f811eb29160e-config-volume\") pod \"collect-profiles-29490975-sdq48\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.454369 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7295ea12-fb51-4ccc-b56b-f811eb29160e-config-volume\") pod \"collect-profiles-29490975-sdq48\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.459046 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7295ea12-fb51-4ccc-b56b-f811eb29160e-secret-volume\") pod \"collect-profiles-29490975-sdq48\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.469017 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q6bk\" (UniqueName: \"kubernetes.io/projected/7295ea12-fb51-4ccc-b56b-f811eb29160e-kube-api-access-4q6bk\") pod \"collect-profiles-29490975-sdq48\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:00 crc kubenswrapper[4787]: I0126 20:15:00.520261 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:01 crc kubenswrapper[4787]: I0126 20:15:01.207780 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48"] Jan 26 20:15:01 crc kubenswrapper[4787]: I0126 20:15:01.613812 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:15:01 crc kubenswrapper[4787]: E0126 20:15:01.614484 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:15:02 crc kubenswrapper[4787]: I0126 20:15:02.340509 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" 
event={"ID":"7295ea12-fb51-4ccc-b56b-f811eb29160e","Type":"ContainerStarted","Data":"12af0a8294b30ceb67992b8da3b2d8d6a834d335398b0a74f854a0412974f85e"} Jan 26 20:15:02 crc kubenswrapper[4787]: I0126 20:15:02.340772 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" event={"ID":"7295ea12-fb51-4ccc-b56b-f811eb29160e","Type":"ContainerStarted","Data":"adf546fcad6630b6d5fa27e5a58c6f87345b08e6099876c840cc928757e8706f"} Jan 26 20:15:02 crc kubenswrapper[4787]: I0126 20:15:02.361720 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" podStartSLOduration=2.36170328 podStartE2EDuration="2.36170328s" podCreationTimestamp="2026-01-26 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:15:02.351797317 +0000 UTC m=+9071.058933450" watchObservedRunningTime="2026-01-26 20:15:02.36170328 +0000 UTC m=+9071.068839413" Jan 26 20:15:03 crc kubenswrapper[4787]: I0126 20:15:03.350613 4787 generic.go:334] "Generic (PLEG): container finished" podID="7295ea12-fb51-4ccc-b56b-f811eb29160e" containerID="12af0a8294b30ceb67992b8da3b2d8d6a834d335398b0a74f854a0412974f85e" exitCode=0 Jan 26 20:15:03 crc kubenswrapper[4787]: I0126 20:15:03.350673 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" event={"ID":"7295ea12-fb51-4ccc-b56b-f811eb29160e","Type":"ContainerDied","Data":"12af0a8294b30ceb67992b8da3b2d8d6a834d335398b0a74f854a0412974f85e"} Jan 26 20:15:04 crc kubenswrapper[4787]: I0126 20:15:04.770661 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:04 crc kubenswrapper[4787]: I0126 20:15:04.951104 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7295ea12-fb51-4ccc-b56b-f811eb29160e-secret-volume\") pod \"7295ea12-fb51-4ccc-b56b-f811eb29160e\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " Jan 26 20:15:04 crc kubenswrapper[4787]: I0126 20:15:04.952141 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q6bk\" (UniqueName: \"kubernetes.io/projected/7295ea12-fb51-4ccc-b56b-f811eb29160e-kube-api-access-4q6bk\") pod \"7295ea12-fb51-4ccc-b56b-f811eb29160e\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " Jan 26 20:15:04 crc kubenswrapper[4787]: I0126 20:15:04.952338 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7295ea12-fb51-4ccc-b56b-f811eb29160e-config-volume\") pod \"7295ea12-fb51-4ccc-b56b-f811eb29160e\" (UID: \"7295ea12-fb51-4ccc-b56b-f811eb29160e\") " Jan 26 20:15:04 crc kubenswrapper[4787]: I0126 20:15:04.959540 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7295ea12-fb51-4ccc-b56b-f811eb29160e-config-volume" (OuterVolumeSpecName: "config-volume") pod "7295ea12-fb51-4ccc-b56b-f811eb29160e" (UID: "7295ea12-fb51-4ccc-b56b-f811eb29160e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:15:04 crc kubenswrapper[4787]: I0126 20:15:04.982887 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7295ea12-fb51-4ccc-b56b-f811eb29160e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7295ea12-fb51-4ccc-b56b-f811eb29160e" (UID: "7295ea12-fb51-4ccc-b56b-f811eb29160e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:15:04 crc kubenswrapper[4787]: I0126 20:15:04.983133 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7295ea12-fb51-4ccc-b56b-f811eb29160e-kube-api-access-4q6bk" (OuterVolumeSpecName: "kube-api-access-4q6bk") pod "7295ea12-fb51-4ccc-b56b-f811eb29160e" (UID: "7295ea12-fb51-4ccc-b56b-f811eb29160e"). InnerVolumeSpecName "kube-api-access-4q6bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:15:05 crc kubenswrapper[4787]: I0126 20:15:05.059910 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q6bk\" (UniqueName: \"kubernetes.io/projected/7295ea12-fb51-4ccc-b56b-f811eb29160e-kube-api-access-4q6bk\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:05 crc kubenswrapper[4787]: I0126 20:15:05.059957 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7295ea12-fb51-4ccc-b56b-f811eb29160e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:05 crc kubenswrapper[4787]: I0126 20:15:05.059968 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7295ea12-fb51-4ccc-b56b-f811eb29160e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:05 crc kubenswrapper[4787]: I0126 20:15:05.375444 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" event={"ID":"7295ea12-fb51-4ccc-b56b-f811eb29160e","Type":"ContainerDied","Data":"adf546fcad6630b6d5fa27e5a58c6f87345b08e6099876c840cc928757e8706f"} Jan 26 20:15:05 crc kubenswrapper[4787]: I0126 20:15:05.375532 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf546fcad6630b6d5fa27e5a58c6f87345b08e6099876c840cc928757e8706f" Jan 26 20:15:05 crc kubenswrapper[4787]: I0126 20:15:05.375630 4787 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sdq48" Jan 26 20:15:05 crc kubenswrapper[4787]: I0126 20:15:05.468937 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q"] Jan 26 20:15:05 crc kubenswrapper[4787]: I0126 20:15:05.482613 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-bhr8q"] Jan 26 20:15:05 crc kubenswrapper[4787]: I0126 20:15:05.606578 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd41c38b-22dc-47b3-8861-c463b95f4201" path="/var/lib/kubelet/pods/cd41c38b-22dc-47b3-8861-c463b95f4201/volumes" Jan 26 20:15:15 crc kubenswrapper[4787]: I0126 20:15:15.590061 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:15:15 crc kubenswrapper[4787]: E0126 20:15:15.590756 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:15:30 crc kubenswrapper[4787]: I0126 20:15:30.591131 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:15:30 crc kubenswrapper[4787]: E0126 20:15:30.592442 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:15:41 crc kubenswrapper[4787]: I0126 20:15:41.606328 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:15:41 crc kubenswrapper[4787]: E0126 20:15:41.607176 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:15:56 crc kubenswrapper[4787]: I0126 20:15:56.590527 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:15:56 crc kubenswrapper[4787]: E0126 20:15:56.591605 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:16:05 crc kubenswrapper[4787]: I0126 20:16:05.175755 4787 scope.go:117] "RemoveContainer" containerID="3c0a5d7401929cd77daf2f526941ede61c8d9e908d7a666c5e7de75b2e980238" Jan 26 20:16:10 crc kubenswrapper[4787]: I0126 20:16:10.590252 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:16:10 crc kubenswrapper[4787]: E0126 20:16:10.591496 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:16:24 crc kubenswrapper[4787]: I0126 20:16:24.589255 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:16:24 crc kubenswrapper[4787]: E0126 20:16:24.590294 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:16:38 crc kubenswrapper[4787]: I0126 20:16:38.589924 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:16:38 crc kubenswrapper[4787]: E0126 20:16:38.591196 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:16:53 crc kubenswrapper[4787]: I0126 20:16:53.590457 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:16:54 crc kubenswrapper[4787]: I0126 20:16:54.861406 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"af13e4b0350755470d2c7505aff4ec237719443e8038715093b471d7db2a0262"} Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.666825 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m54mb"] Jan 26 20:17:26 crc kubenswrapper[4787]: E0126 20:17:26.667905 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7295ea12-fb51-4ccc-b56b-f811eb29160e" containerName="collect-profiles" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.667925 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7295ea12-fb51-4ccc-b56b-f811eb29160e" containerName="collect-profiles" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.668304 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7295ea12-fb51-4ccc-b56b-f811eb29160e" containerName="collect-profiles" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.670672 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.699361 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m54mb"] Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.798890 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqqmq\" (UniqueName: \"kubernetes.io/projected/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-kube-api-access-zqqmq\") pod \"redhat-operators-m54mb\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.798996 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-catalog-content\") pod \"redhat-operators-m54mb\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.799291 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-utilities\") pod \"redhat-operators-m54mb\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.900732 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-utilities\") pod \"redhat-operators-m54mb\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.900838 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zqqmq\" (UniqueName: \"kubernetes.io/projected/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-kube-api-access-zqqmq\") pod \"redhat-operators-m54mb\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.900871 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-catalog-content\") pod \"redhat-operators-m54mb\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.901220 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-utilities\") pod \"redhat-operators-m54mb\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.901233 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-catalog-content\") pod \"redhat-operators-m54mb\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.937466 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqqmq\" (UniqueName: \"kubernetes.io/projected/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-kube-api-access-zqqmq\") pod \"redhat-operators-m54mb\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:26 crc kubenswrapper[4787]: I0126 20:17:26.995332 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:27 crc kubenswrapper[4787]: I0126 20:17:27.556753 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m54mb"] Jan 26 20:17:28 crc kubenswrapper[4787]: I0126 20:17:28.262445 4787 generic.go:334] "Generic (PLEG): container finished" podID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerID="09f8ac3ef155c60f10bf2931a2bd8b995fce57de94dd2cbceccf9649f7341215" exitCode=0 Jan 26 20:17:28 crc kubenswrapper[4787]: I0126 20:17:28.262587 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m54mb" event={"ID":"4cc0b442-b6d7-4e95-b70b-9c80b5351f81","Type":"ContainerDied","Data":"09f8ac3ef155c60f10bf2931a2bd8b995fce57de94dd2cbceccf9649f7341215"} Jan 26 20:17:28 crc kubenswrapper[4787]: I0126 20:17:28.262715 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m54mb" event={"ID":"4cc0b442-b6d7-4e95-b70b-9c80b5351f81","Type":"ContainerStarted","Data":"6718a2b4ec5e788a516b4af2c6d1871407048e02025cfbe2a3aafb8a917f1a55"} Jan 26 20:17:29 crc kubenswrapper[4787]: I0126 20:17:29.273086 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m54mb" event={"ID":"4cc0b442-b6d7-4e95-b70b-9c80b5351f81","Type":"ContainerStarted","Data":"19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365"} Jan 26 20:17:32 crc kubenswrapper[4787]: I0126 20:17:32.311531 4787 generic.go:334] "Generic (PLEG): container finished" podID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerID="19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365" exitCode=0 Jan 26 20:17:32 crc kubenswrapper[4787]: I0126 20:17:32.311622 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m54mb" 
event={"ID":"4cc0b442-b6d7-4e95-b70b-9c80b5351f81","Type":"ContainerDied","Data":"19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365"} Jan 26 20:17:33 crc kubenswrapper[4787]: I0126 20:17:33.325574 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m54mb" event={"ID":"4cc0b442-b6d7-4e95-b70b-9c80b5351f81","Type":"ContainerStarted","Data":"45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2"} Jan 26 20:17:33 crc kubenswrapper[4787]: I0126 20:17:33.350735 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m54mb" podStartSLOduration=2.844388792 podStartE2EDuration="7.350711659s" podCreationTimestamp="2026-01-26 20:17:26 +0000 UTC" firstStartedPulling="2026-01-26 20:17:28.264873705 +0000 UTC m=+9216.972009838" lastFinishedPulling="2026-01-26 20:17:32.771196562 +0000 UTC m=+9221.478332705" observedRunningTime="2026-01-26 20:17:33.341244798 +0000 UTC m=+9222.048380931" watchObservedRunningTime="2026-01-26 20:17:33.350711659 +0000 UTC m=+9222.057847802" Jan 26 20:17:36 crc kubenswrapper[4787]: I0126 20:17:36.996427 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:36 crc kubenswrapper[4787]: I0126 20:17:36.996996 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:38 crc kubenswrapper[4787]: I0126 20:17:38.065474 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m54mb" podUID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerName="registry-server" probeResult="failure" output=< Jan 26 20:17:38 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 26 20:17:38 crc kubenswrapper[4787]: > Jan 26 20:17:47 crc kubenswrapper[4787]: I0126 20:17:47.069545 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:47 crc kubenswrapper[4787]: I0126 20:17:47.141837 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:47 crc kubenswrapper[4787]: I0126 20:17:47.311611 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m54mb"] Jan 26 20:17:48 crc kubenswrapper[4787]: I0126 20:17:48.473838 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m54mb" podUID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerName="registry-server" containerID="cri-o://45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2" gracePeriod=2 Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.056049 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.214093 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-catalog-content\") pod \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.214273 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqqmq\" (UniqueName: \"kubernetes.io/projected/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-kube-api-access-zqqmq\") pod \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.214349 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-utilities\") pod 
\"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\" (UID: \"4cc0b442-b6d7-4e95-b70b-9c80b5351f81\") " Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.215250 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-utilities" (OuterVolumeSpecName: "utilities") pod "4cc0b442-b6d7-4e95-b70b-9c80b5351f81" (UID: "4cc0b442-b6d7-4e95-b70b-9c80b5351f81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.306487 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-kube-api-access-zqqmq" (OuterVolumeSpecName: "kube-api-access-zqqmq") pod "4cc0b442-b6d7-4e95-b70b-9c80b5351f81" (UID: "4cc0b442-b6d7-4e95-b70b-9c80b5351f81"). InnerVolumeSpecName "kube-api-access-zqqmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.316751 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqqmq\" (UniqueName: \"kubernetes.io/projected/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-kube-api-access-zqqmq\") on node \"crc\" DevicePath \"\"" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.316778 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.356650 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cc0b442-b6d7-4e95-b70b-9c80b5351f81" (UID: "4cc0b442-b6d7-4e95-b70b-9c80b5351f81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.418774 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cc0b442-b6d7-4e95-b70b-9c80b5351f81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.487170 4787 generic.go:334] "Generic (PLEG): container finished" podID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerID="45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2" exitCode=0 Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.487219 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m54mb" event={"ID":"4cc0b442-b6d7-4e95-b70b-9c80b5351f81","Type":"ContainerDied","Data":"45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2"} Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.487251 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m54mb" event={"ID":"4cc0b442-b6d7-4e95-b70b-9c80b5351f81","Type":"ContainerDied","Data":"6718a2b4ec5e788a516b4af2c6d1871407048e02025cfbe2a3aafb8a917f1a55"} Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.487271 4787 scope.go:117] "RemoveContainer" containerID="45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.487293 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m54mb" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.513770 4787 scope.go:117] "RemoveContainer" containerID="19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.522247 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m54mb"] Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.532127 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m54mb"] Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.548270 4787 scope.go:117] "RemoveContainer" containerID="09f8ac3ef155c60f10bf2931a2bd8b995fce57de94dd2cbceccf9649f7341215" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.580280 4787 scope.go:117] "RemoveContainer" containerID="45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2" Jan 26 20:17:49 crc kubenswrapper[4787]: E0126 20:17:49.580782 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2\": container with ID starting with 45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2 not found: ID does not exist" containerID="45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.580849 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2"} err="failed to get container status \"45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2\": rpc error: code = NotFound desc = could not find container \"45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2\": container with ID starting with 45147e766807ec37de6e8080edfa8027bb6e36796d246d00e022ea4f0cd83ae2 not found: ID does 
not exist" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.580873 4787 scope.go:117] "RemoveContainer" containerID="19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365" Jan 26 20:17:49 crc kubenswrapper[4787]: E0126 20:17:49.581222 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365\": container with ID starting with 19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365 not found: ID does not exist" containerID="19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.581248 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365"} err="failed to get container status \"19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365\": rpc error: code = NotFound desc = could not find container \"19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365\": container with ID starting with 19fd828d087d0ee936882e6e11da1b3ed6729ac3e240d0bb22523bb4d89a9365 not found: ID does not exist" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.581261 4787 scope.go:117] "RemoveContainer" containerID="09f8ac3ef155c60f10bf2931a2bd8b995fce57de94dd2cbceccf9649f7341215" Jan 26 20:17:49 crc kubenswrapper[4787]: E0126 20:17:49.581540 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09f8ac3ef155c60f10bf2931a2bd8b995fce57de94dd2cbceccf9649f7341215\": container with ID starting with 09f8ac3ef155c60f10bf2931a2bd8b995fce57de94dd2cbceccf9649f7341215 not found: ID does not exist" containerID="09f8ac3ef155c60f10bf2931a2bd8b995fce57de94dd2cbceccf9649f7341215" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.581561 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09f8ac3ef155c60f10bf2931a2bd8b995fce57de94dd2cbceccf9649f7341215"} err="failed to get container status \"09f8ac3ef155c60f10bf2931a2bd8b995fce57de94dd2cbceccf9649f7341215\": rpc error: code = NotFound desc = could not find container \"09f8ac3ef155c60f10bf2931a2bd8b995fce57de94dd2cbceccf9649f7341215\": container with ID starting with 09f8ac3ef155c60f10bf2931a2bd8b995fce57de94dd2cbceccf9649f7341215 not found: ID does not exist" Jan 26 20:17:49 crc kubenswrapper[4787]: I0126 20:17:49.601138 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" path="/var/lib/kubelet/pods/4cc0b442-b6d7-4e95-b70b-9c80b5351f81/volumes" Jan 26 20:19:02 crc kubenswrapper[4787]: I0126 20:19:02.927108 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-445vv"] Jan 26 20:19:02 crc kubenswrapper[4787]: E0126 20:19:02.928135 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerName="extract-utilities" Jan 26 20:19:02 crc kubenswrapper[4787]: I0126 20:19:02.928150 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerName="extract-utilities" Jan 26 20:19:02 crc kubenswrapper[4787]: E0126 20:19:02.928180 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerName="extract-content" Jan 26 20:19:02 crc kubenswrapper[4787]: I0126 20:19:02.928189 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerName="extract-content" Jan 26 20:19:02 crc kubenswrapper[4787]: E0126 20:19:02.928229 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerName="registry-server" Jan 26 20:19:02 crc kubenswrapper[4787]: I0126 20:19:02.928237 4787 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerName="registry-server" Jan 26 20:19:02 crc kubenswrapper[4787]: I0126 20:19:02.928484 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc0b442-b6d7-4e95-b70b-9c80b5351f81" containerName="registry-server" Jan 26 20:19:02 crc kubenswrapper[4787]: I0126 20:19:02.930351 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:02 crc kubenswrapper[4787]: I0126 20:19:02.964487 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-445vv"] Jan 26 20:19:03 crc kubenswrapper[4787]: I0126 20:19:03.035253 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62mnp\" (UniqueName: \"kubernetes.io/projected/44124713-caec-4d4a-8044-8043c9b2d8c9-kube-api-access-62mnp\") pod \"certified-operators-445vv\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:03 crc kubenswrapper[4787]: I0126 20:19:03.035510 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-catalog-content\") pod \"certified-operators-445vv\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:03 crc kubenswrapper[4787]: I0126 20:19:03.035598 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-utilities\") pod \"certified-operators-445vv\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:03 crc kubenswrapper[4787]: I0126 
20:19:03.137448 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62mnp\" (UniqueName: \"kubernetes.io/projected/44124713-caec-4d4a-8044-8043c9b2d8c9-kube-api-access-62mnp\") pod \"certified-operators-445vv\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:03 crc kubenswrapper[4787]: I0126 20:19:03.137835 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-catalog-content\") pod \"certified-operators-445vv\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:03 crc kubenswrapper[4787]: I0126 20:19:03.137862 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-utilities\") pod \"certified-operators-445vv\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:03 crc kubenswrapper[4787]: I0126 20:19:03.138600 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-catalog-content\") pod \"certified-operators-445vv\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:03 crc kubenswrapper[4787]: I0126 20:19:03.138723 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-utilities\") pod \"certified-operators-445vv\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:03 crc kubenswrapper[4787]: I0126 20:19:03.169102 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62mnp\" (UniqueName: \"kubernetes.io/projected/44124713-caec-4d4a-8044-8043c9b2d8c9-kube-api-access-62mnp\") pod \"certified-operators-445vv\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:03 crc kubenswrapper[4787]: I0126 20:19:03.293112 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:03 crc kubenswrapper[4787]: I0126 20:19:03.821328 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-445vv"] Jan 26 20:19:04 crc kubenswrapper[4787]: I0126 20:19:04.527241 4787 generic.go:334] "Generic (PLEG): container finished" podID="44124713-caec-4d4a-8044-8043c9b2d8c9" containerID="0da1af5f5ab12b0d1e0b10ad723cc02a949f6b132ec024c6fe695d0a895c71c4" exitCode=0 Jan 26 20:19:04 crc kubenswrapper[4787]: I0126 20:19:04.527366 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-445vv" event={"ID":"44124713-caec-4d4a-8044-8043c9b2d8c9","Type":"ContainerDied","Data":"0da1af5f5ab12b0d1e0b10ad723cc02a949f6b132ec024c6fe695d0a895c71c4"} Jan 26 20:19:04 crc kubenswrapper[4787]: I0126 20:19:04.527503 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-445vv" event={"ID":"44124713-caec-4d4a-8044-8043c9b2d8c9","Type":"ContainerStarted","Data":"ebeaa26822d0efec8f7d65571177298f48e58200de0bb08d58426fa7f324de25"} Jan 26 20:19:04 crc kubenswrapper[4787]: I0126 20:19:04.530236 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:19:06 crc kubenswrapper[4787]: I0126 20:19:06.556041 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-445vv" 
event={"ID":"44124713-caec-4d4a-8044-8043c9b2d8c9","Type":"ContainerStarted","Data":"ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d"} Jan 26 20:19:07 crc kubenswrapper[4787]: I0126 20:19:07.570421 4787 generic.go:334] "Generic (PLEG): container finished" podID="44124713-caec-4d4a-8044-8043c9b2d8c9" containerID="ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d" exitCode=0 Jan 26 20:19:07 crc kubenswrapper[4787]: I0126 20:19:07.570883 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-445vv" event={"ID":"44124713-caec-4d4a-8044-8043c9b2d8c9","Type":"ContainerDied","Data":"ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d"} Jan 26 20:19:09 crc kubenswrapper[4787]: I0126 20:19:09.605771 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-445vv" event={"ID":"44124713-caec-4d4a-8044-8043c9b2d8c9","Type":"ContainerStarted","Data":"b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587"} Jan 26 20:19:09 crc kubenswrapper[4787]: I0126 20:19:09.627031 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-445vv" podStartSLOduration=4.170006124 podStartE2EDuration="7.627007788s" podCreationTimestamp="2026-01-26 20:19:02 +0000 UTC" firstStartedPulling="2026-01-26 20:19:04.529931264 +0000 UTC m=+9313.237067397" lastFinishedPulling="2026-01-26 20:19:07.986932928 +0000 UTC m=+9316.694069061" observedRunningTime="2026-01-26 20:19:09.624098807 +0000 UTC m=+9318.331234960" watchObservedRunningTime="2026-01-26 20:19:09.627007788 +0000 UTC m=+9318.334143931" Jan 26 20:19:13 crc kubenswrapper[4787]: I0126 20:19:13.294206 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:13 crc kubenswrapper[4787]: I0126 20:19:13.294868 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:13 crc kubenswrapper[4787]: I0126 20:19:13.360588 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:13 crc kubenswrapper[4787]: I0126 20:19:13.699052 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:13 crc kubenswrapper[4787]: I0126 20:19:13.752094 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-445vv"] Jan 26 20:19:15 crc kubenswrapper[4787]: I0126 20:19:15.672205 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-445vv" podUID="44124713-caec-4d4a-8044-8043c9b2d8c9" containerName="registry-server" containerID="cri-o://b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587" gracePeriod=2 Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.303253 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.452779 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62mnp\" (UniqueName: \"kubernetes.io/projected/44124713-caec-4d4a-8044-8043c9b2d8c9-kube-api-access-62mnp\") pod \"44124713-caec-4d4a-8044-8043c9b2d8c9\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.452934 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-utilities\") pod \"44124713-caec-4d4a-8044-8043c9b2d8c9\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.453245 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-catalog-content\") pod \"44124713-caec-4d4a-8044-8043c9b2d8c9\" (UID: \"44124713-caec-4d4a-8044-8043c9b2d8c9\") " Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.454533 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-utilities" (OuterVolumeSpecName: "utilities") pod "44124713-caec-4d4a-8044-8043c9b2d8c9" (UID: "44124713-caec-4d4a-8044-8043c9b2d8c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.463172 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44124713-caec-4d4a-8044-8043c9b2d8c9-kube-api-access-62mnp" (OuterVolumeSpecName: "kube-api-access-62mnp") pod "44124713-caec-4d4a-8044-8043c9b2d8c9" (UID: "44124713-caec-4d4a-8044-8043c9b2d8c9"). InnerVolumeSpecName "kube-api-access-62mnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.509816 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44124713-caec-4d4a-8044-8043c9b2d8c9" (UID: "44124713-caec-4d4a-8044-8043c9b2d8c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.555751 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62mnp\" (UniqueName: \"kubernetes.io/projected/44124713-caec-4d4a-8044-8043c9b2d8c9-kube-api-access-62mnp\") on node \"crc\" DevicePath \"\"" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.555785 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.555795 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44124713-caec-4d4a-8044-8043c9b2d8c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.690879 4787 generic.go:334] "Generic (PLEG): container finished" podID="44124713-caec-4d4a-8044-8043c9b2d8c9" containerID="b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587" exitCode=0 Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.690989 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-445vv" event={"ID":"44124713-caec-4d4a-8044-8043c9b2d8c9","Type":"ContainerDied","Data":"b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587"} Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.691036 4787 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-445vv" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.691058 4787 scope.go:117] "RemoveContainer" containerID="b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.691042 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-445vv" event={"ID":"44124713-caec-4d4a-8044-8043c9b2d8c9","Type":"ContainerDied","Data":"ebeaa26822d0efec8f7d65571177298f48e58200de0bb08d58426fa7f324de25"} Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.731219 4787 scope.go:117] "RemoveContainer" containerID="ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.740155 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-445vv"] Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.757770 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-445vv"] Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.775966 4787 scope.go:117] "RemoveContainer" containerID="0da1af5f5ab12b0d1e0b10ad723cc02a949f6b132ec024c6fe695d0a895c71c4" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.807914 4787 scope.go:117] "RemoveContainer" containerID="b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.808013 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.808081 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:19:16 crc kubenswrapper[4787]: E0126 20:19:16.808598 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587\": container with ID starting with b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587 not found: ID does not exist" containerID="b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.808657 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587"} err="failed to get container status \"b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587\": rpc error: code = NotFound desc = could not find container \"b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587\": container with ID starting with b460926232715dd1718354758401026c260f2a8a11d09b1f0e8bb489765bc587 not found: ID does not exist" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.808692 4787 scope.go:117] "RemoveContainer" containerID="ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d" Jan 26 20:19:16 crc kubenswrapper[4787]: E0126 20:19:16.809201 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d\": container with ID starting with ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d not found: ID does not exist" containerID="ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d" Jan 26 20:19:16 crc 
kubenswrapper[4787]: I0126 20:19:16.809246 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d"} err="failed to get container status \"ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d\": rpc error: code = NotFound desc = could not find container \"ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d\": container with ID starting with ba4dd70e2c00a62eabd8b86636ddee4b364c928efe1029efa61ed63d26f8196d not found: ID does not exist" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.809274 4787 scope.go:117] "RemoveContainer" containerID="0da1af5f5ab12b0d1e0b10ad723cc02a949f6b132ec024c6fe695d0a895c71c4" Jan 26 20:19:16 crc kubenswrapper[4787]: E0126 20:19:16.809721 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0da1af5f5ab12b0d1e0b10ad723cc02a949f6b132ec024c6fe695d0a895c71c4\": container with ID starting with 0da1af5f5ab12b0d1e0b10ad723cc02a949f6b132ec024c6fe695d0a895c71c4 not found: ID does not exist" containerID="0da1af5f5ab12b0d1e0b10ad723cc02a949f6b132ec024c6fe695d0a895c71c4" Jan 26 20:19:16 crc kubenswrapper[4787]: I0126 20:19:16.809760 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da1af5f5ab12b0d1e0b10ad723cc02a949f6b132ec024c6fe695d0a895c71c4"} err="failed to get container status \"0da1af5f5ab12b0d1e0b10ad723cc02a949f6b132ec024c6fe695d0a895c71c4\": rpc error: code = NotFound desc = could not find container \"0da1af5f5ab12b0d1e0b10ad723cc02a949f6b132ec024c6fe695d0a895c71c4\": container with ID starting with 0da1af5f5ab12b0d1e0b10ad723cc02a949f6b132ec024c6fe695d0a895c71c4 not found: ID does not exist" Jan 26 20:19:17 crc kubenswrapper[4787]: I0126 20:19:17.615800 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44124713-caec-4d4a-8044-8043c9b2d8c9" 
path="/var/lib/kubelet/pods/44124713-caec-4d4a-8044-8043c9b2d8c9/volumes" Jan 26 20:19:46 crc kubenswrapper[4787]: I0126 20:19:46.807508 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:19:46 crc kubenswrapper[4787]: I0126 20:19:46.808381 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:20:05 crc kubenswrapper[4787]: I0126 20:20:05.375415 4787 scope.go:117] "RemoveContainer" containerID="0b68f2a95b2245e619d4c5f1fcb27869af105daace29f9ad5ce40fbfc5c015c4" Jan 26 20:20:05 crc kubenswrapper[4787]: I0126 20:20:05.425571 4787 scope.go:117] "RemoveContainer" containerID="236c93aacf7e2829b6cff336cb74bd169220cdb44d0f743233ff4252e0cec99d" Jan 26 20:20:16 crc kubenswrapper[4787]: I0126 20:20:16.807705 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:20:16 crc kubenswrapper[4787]: I0126 20:20:16.808360 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:20:16 crc kubenswrapper[4787]: I0126 20:20:16.808439 4787 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 20:20:16 crc kubenswrapper[4787]: I0126 20:20:16.809573 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af13e4b0350755470d2c7505aff4ec237719443e8038715093b471d7db2a0262"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:20:16 crc kubenswrapper[4787]: I0126 20:20:16.809672 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://af13e4b0350755470d2c7505aff4ec237719443e8038715093b471d7db2a0262" gracePeriod=600 Jan 26 20:20:17 crc kubenswrapper[4787]: I0126 20:20:17.455488 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="af13e4b0350755470d2c7505aff4ec237719443e8038715093b471d7db2a0262" exitCode=0 Jan 26 20:20:17 crc kubenswrapper[4787]: I0126 20:20:17.455535 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"af13e4b0350755470d2c7505aff4ec237719443e8038715093b471d7db2a0262"} Jan 26 20:20:17 crc kubenswrapper[4787]: I0126 20:20:17.455831 4787 scope.go:117] "RemoveContainer" containerID="67cbc6ffd132eb183f56dbe818e7e12915aa31aef6116ab4eb53e17dd7ea3ebb" Jan 26 20:20:18 crc kubenswrapper[4787]: I0126 20:20:18.466468 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" 
event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd"} Jan 26 20:21:16 crc kubenswrapper[4787]: I0126 20:21:16.941816 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kmdgq"] Jan 26 20:21:16 crc kubenswrapper[4787]: E0126 20:21:16.943187 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44124713-caec-4d4a-8044-8043c9b2d8c9" containerName="extract-utilities" Jan 26 20:21:16 crc kubenswrapper[4787]: I0126 20:21:16.943211 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="44124713-caec-4d4a-8044-8043c9b2d8c9" containerName="extract-utilities" Jan 26 20:21:16 crc kubenswrapper[4787]: E0126 20:21:16.943253 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44124713-caec-4d4a-8044-8043c9b2d8c9" containerName="registry-server" Jan 26 20:21:16 crc kubenswrapper[4787]: I0126 20:21:16.943266 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="44124713-caec-4d4a-8044-8043c9b2d8c9" containerName="registry-server" Jan 26 20:21:16 crc kubenswrapper[4787]: E0126 20:21:16.943290 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44124713-caec-4d4a-8044-8043c9b2d8c9" containerName="extract-content" Jan 26 20:21:16 crc kubenswrapper[4787]: I0126 20:21:16.943304 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="44124713-caec-4d4a-8044-8043c9b2d8c9" containerName="extract-content" Jan 26 20:21:16 crc kubenswrapper[4787]: I0126 20:21:16.943705 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="44124713-caec-4d4a-8044-8043c9b2d8c9" containerName="registry-server" Jan 26 20:21:16 crc kubenswrapper[4787]: I0126 20:21:16.949691 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:16 crc kubenswrapper[4787]: I0126 20:21:16.999611 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmdgq"] Jan 26 20:21:17 crc kubenswrapper[4787]: I0126 20:21:17.089347 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-catalog-content\") pod \"redhat-marketplace-kmdgq\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:17 crc kubenswrapper[4787]: I0126 20:21:17.089646 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-utilities\") pod \"redhat-marketplace-kmdgq\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:17 crc kubenswrapper[4787]: I0126 20:21:17.089789 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sczl7\" (UniqueName: \"kubernetes.io/projected/161b0522-4a50-4b5e-9bcb-44c335153da5-kube-api-access-sczl7\") pod \"redhat-marketplace-kmdgq\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:17 crc kubenswrapper[4787]: I0126 20:21:17.194910 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-catalog-content\") pod \"redhat-marketplace-kmdgq\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:17 crc kubenswrapper[4787]: I0126 20:21:17.194988 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-utilities\") pod \"redhat-marketplace-kmdgq\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:17 crc kubenswrapper[4787]: I0126 20:21:17.195056 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sczl7\" (UniqueName: \"kubernetes.io/projected/161b0522-4a50-4b5e-9bcb-44c335153da5-kube-api-access-sczl7\") pod \"redhat-marketplace-kmdgq\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:17 crc kubenswrapper[4787]: I0126 20:21:17.195645 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-catalog-content\") pod \"redhat-marketplace-kmdgq\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:17 crc kubenswrapper[4787]: I0126 20:21:17.195811 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-utilities\") pod \"redhat-marketplace-kmdgq\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:17 crc kubenswrapper[4787]: I0126 20:21:17.228752 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sczl7\" (UniqueName: \"kubernetes.io/projected/161b0522-4a50-4b5e-9bcb-44c335153da5-kube-api-access-sczl7\") pod \"redhat-marketplace-kmdgq\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:17 crc kubenswrapper[4787]: I0126 20:21:17.289176 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:17 crc kubenswrapper[4787]: I0126 20:21:17.791369 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmdgq"] Jan 26 20:21:19 crc kubenswrapper[4787]: I0126 20:21:19.232077 4787 generic.go:334] "Generic (PLEG): container finished" podID="161b0522-4a50-4b5e-9bcb-44c335153da5" containerID="180a3ca989b78b415ca95f77891a82a21770c769159ed50326e21d97ead4326e" exitCode=0 Jan 26 20:21:19 crc kubenswrapper[4787]: I0126 20:21:19.232278 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmdgq" event={"ID":"161b0522-4a50-4b5e-9bcb-44c335153da5","Type":"ContainerDied","Data":"180a3ca989b78b415ca95f77891a82a21770c769159ed50326e21d97ead4326e"} Jan 26 20:21:19 crc kubenswrapper[4787]: I0126 20:21:19.233007 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmdgq" event={"ID":"161b0522-4a50-4b5e-9bcb-44c335153da5","Type":"ContainerStarted","Data":"c1edb9f0f169d51181a8b0a1bb0d7576b2186d1d046e688fa7f3cd6738fb3eee"} Jan 26 20:21:21 crc kubenswrapper[4787]: I0126 20:21:21.258390 4787 generic.go:334] "Generic (PLEG): container finished" podID="161b0522-4a50-4b5e-9bcb-44c335153da5" containerID="333c5c78bce239df2bd62247f1b5360e9a71c850af733ebdce7c67f189caf9e4" exitCode=0 Jan 26 20:21:21 crc kubenswrapper[4787]: I0126 20:21:21.258475 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmdgq" event={"ID":"161b0522-4a50-4b5e-9bcb-44c335153da5","Type":"ContainerDied","Data":"333c5c78bce239df2bd62247f1b5360e9a71c850af733ebdce7c67f189caf9e4"} Jan 26 20:21:22 crc kubenswrapper[4787]: I0126 20:21:22.271973 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmdgq" 
event={"ID":"161b0522-4a50-4b5e-9bcb-44c335153da5","Type":"ContainerStarted","Data":"29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43"} Jan 26 20:21:22 crc kubenswrapper[4787]: I0126 20:21:22.294771 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kmdgq" podStartSLOduration=3.673710724 podStartE2EDuration="6.29475121s" podCreationTimestamp="2026-01-26 20:21:16 +0000 UTC" firstStartedPulling="2026-01-26 20:21:19.235101938 +0000 UTC m=+9447.942238111" lastFinishedPulling="2026-01-26 20:21:21.856142454 +0000 UTC m=+9450.563278597" observedRunningTime="2026-01-26 20:21:22.292583587 +0000 UTC m=+9450.999719720" watchObservedRunningTime="2026-01-26 20:21:22.29475121 +0000 UTC m=+9451.001887343" Jan 26 20:21:27 crc kubenswrapper[4787]: I0126 20:21:27.291032 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:27 crc kubenswrapper[4787]: I0126 20:21:27.291669 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:27 crc kubenswrapper[4787]: I0126 20:21:27.357006 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:27 crc kubenswrapper[4787]: I0126 20:21:27.407830 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:27 crc kubenswrapper[4787]: I0126 20:21:27.622554 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmdgq"] Jan 26 20:21:29 crc kubenswrapper[4787]: I0126 20:21:29.357522 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kmdgq" podUID="161b0522-4a50-4b5e-9bcb-44c335153da5" containerName="registry-server" 
containerID="cri-o://29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43" gracePeriod=2 Jan 26 20:21:29 crc kubenswrapper[4787]: I0126 20:21:29.949066 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.012449 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sczl7\" (UniqueName: \"kubernetes.io/projected/161b0522-4a50-4b5e-9bcb-44c335153da5-kube-api-access-sczl7\") pod \"161b0522-4a50-4b5e-9bcb-44c335153da5\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.012640 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-catalog-content\") pod \"161b0522-4a50-4b5e-9bcb-44c335153da5\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.013229 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-utilities\") pod \"161b0522-4a50-4b5e-9bcb-44c335153da5\" (UID: \"161b0522-4a50-4b5e-9bcb-44c335153da5\") " Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.014172 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-utilities" (OuterVolumeSpecName: "utilities") pod "161b0522-4a50-4b5e-9bcb-44c335153da5" (UID: "161b0522-4a50-4b5e-9bcb-44c335153da5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.022622 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161b0522-4a50-4b5e-9bcb-44c335153da5-kube-api-access-sczl7" (OuterVolumeSpecName: "kube-api-access-sczl7") pod "161b0522-4a50-4b5e-9bcb-44c335153da5" (UID: "161b0522-4a50-4b5e-9bcb-44c335153da5"). InnerVolumeSpecName "kube-api-access-sczl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.041131 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "161b0522-4a50-4b5e-9bcb-44c335153da5" (UID: "161b0522-4a50-4b5e-9bcb-44c335153da5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.116205 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.116237 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sczl7\" (UniqueName: \"kubernetes.io/projected/161b0522-4a50-4b5e-9bcb-44c335153da5-kube-api-access-sczl7\") on node \"crc\" DevicePath \"\"" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.116247 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161b0522-4a50-4b5e-9bcb-44c335153da5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.366902 4787 generic.go:334] "Generic (PLEG): container finished" podID="161b0522-4a50-4b5e-9bcb-44c335153da5" 
containerID="29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43" exitCode=0 Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.366965 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmdgq" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.366969 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmdgq" event={"ID":"161b0522-4a50-4b5e-9bcb-44c335153da5","Type":"ContainerDied","Data":"29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43"} Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.367008 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmdgq" event={"ID":"161b0522-4a50-4b5e-9bcb-44c335153da5","Type":"ContainerDied","Data":"c1edb9f0f169d51181a8b0a1bb0d7576b2186d1d046e688fa7f3cd6738fb3eee"} Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.367031 4787 scope.go:117] "RemoveContainer" containerID="29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.388931 4787 scope.go:117] "RemoveContainer" containerID="333c5c78bce239df2bd62247f1b5360e9a71c850af733ebdce7c67f189caf9e4" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.418857 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmdgq"] Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.436934 4787 scope.go:117] "RemoveContainer" containerID="180a3ca989b78b415ca95f77891a82a21770c769159ed50326e21d97ead4326e" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.437522 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmdgq"] Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.498804 4787 scope.go:117] "RemoveContainer" containerID="29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43" Jan 26 
20:21:30 crc kubenswrapper[4787]: E0126 20:21:30.499307 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43\": container with ID starting with 29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43 not found: ID does not exist" containerID="29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.499349 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43"} err="failed to get container status \"29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43\": rpc error: code = NotFound desc = could not find container \"29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43\": container with ID starting with 29873a09b5976bbdaafb2da11a22a2a95cdf3342868fda301b2c5bd3bdbb6f43 not found: ID does not exist" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.499383 4787 scope.go:117] "RemoveContainer" containerID="333c5c78bce239df2bd62247f1b5360e9a71c850af733ebdce7c67f189caf9e4" Jan 26 20:21:30 crc kubenswrapper[4787]: E0126 20:21:30.499868 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"333c5c78bce239df2bd62247f1b5360e9a71c850af733ebdce7c67f189caf9e4\": container with ID starting with 333c5c78bce239df2bd62247f1b5360e9a71c850af733ebdce7c67f189caf9e4 not found: ID does not exist" containerID="333c5c78bce239df2bd62247f1b5360e9a71c850af733ebdce7c67f189caf9e4" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.499911 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"333c5c78bce239df2bd62247f1b5360e9a71c850af733ebdce7c67f189caf9e4"} err="failed to get container status 
\"333c5c78bce239df2bd62247f1b5360e9a71c850af733ebdce7c67f189caf9e4\": rpc error: code = NotFound desc = could not find container \"333c5c78bce239df2bd62247f1b5360e9a71c850af733ebdce7c67f189caf9e4\": container with ID starting with 333c5c78bce239df2bd62247f1b5360e9a71c850af733ebdce7c67f189caf9e4 not found: ID does not exist" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.499942 4787 scope.go:117] "RemoveContainer" containerID="180a3ca989b78b415ca95f77891a82a21770c769159ed50326e21d97ead4326e" Jan 26 20:21:30 crc kubenswrapper[4787]: E0126 20:21:30.500268 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180a3ca989b78b415ca95f77891a82a21770c769159ed50326e21d97ead4326e\": container with ID starting with 180a3ca989b78b415ca95f77891a82a21770c769159ed50326e21d97ead4326e not found: ID does not exist" containerID="180a3ca989b78b415ca95f77891a82a21770c769159ed50326e21d97ead4326e" Jan 26 20:21:30 crc kubenswrapper[4787]: I0126 20:21:30.500297 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180a3ca989b78b415ca95f77891a82a21770c769159ed50326e21d97ead4326e"} err="failed to get container status \"180a3ca989b78b415ca95f77891a82a21770c769159ed50326e21d97ead4326e\": rpc error: code = NotFound desc = could not find container \"180a3ca989b78b415ca95f77891a82a21770c769159ed50326e21d97ead4326e\": container with ID starting with 180a3ca989b78b415ca95f77891a82a21770c769159ed50326e21d97ead4326e not found: ID does not exist" Jan 26 20:21:31 crc kubenswrapper[4787]: I0126 20:21:31.611216 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161b0522-4a50-4b5e-9bcb-44c335153da5" path="/var/lib/kubelet/pods/161b0522-4a50-4b5e-9bcb-44c335153da5/volumes" Jan 26 20:22:18 crc kubenswrapper[4787]: I0126 20:22:18.291144 4787 trace.go:236] Trace[1995986951]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" 
(26-Jan-2026 20:22:17.186) (total time: 1104ms): Jan 26 20:22:18 crc kubenswrapper[4787]: Trace[1995986951]: [1.104047904s] [1.104047904s] END Jan 26 20:22:46 crc kubenswrapper[4787]: I0126 20:22:46.808446 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:22:46 crc kubenswrapper[4787]: I0126 20:22:46.809006 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:23:16 crc kubenswrapper[4787]: I0126 20:23:16.808304 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:23:16 crc kubenswrapper[4787]: I0126 20:23:16.808762 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:23:38 crc kubenswrapper[4787]: I0126 20:23:38.202428 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_9edb1909-c634-429b-b9cd-dc59167c9850/init-config-reloader/0.log" Jan 26 20:23:38 crc kubenswrapper[4787]: I0126 20:23:38.359237 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_alertmanager-metric-storage-0_9edb1909-c634-429b-b9cd-dc59167c9850/alertmanager/0.log" Jan 26 20:23:38 crc kubenswrapper[4787]: I0126 20:23:38.374245 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_9edb1909-c634-429b-b9cd-dc59167c9850/init-config-reloader/0.log" Jan 26 20:23:38 crc kubenswrapper[4787]: I0126 20:23:38.396748 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_9edb1909-c634-429b-b9cd-dc59167c9850/config-reloader/0.log" Jan 26 20:23:38 crc kubenswrapper[4787]: I0126 20:23:38.586857 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_fd637985-9b74-4b96-a04f-b197c9264c9b/aodh-evaluator/0.log" Jan 26 20:23:38 crc kubenswrapper[4787]: I0126 20:23:38.610292 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_fd637985-9b74-4b96-a04f-b197c9264c9b/aodh-api/0.log" Jan 26 20:23:38 crc kubenswrapper[4787]: I0126 20:23:38.657312 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_fd637985-9b74-4b96-a04f-b197c9264c9b/aodh-listener/0.log" Jan 26 20:23:38 crc kubenswrapper[4787]: I0126 20:23:38.662062 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_fd637985-9b74-4b96-a04f-b197c9264c9b/aodh-notifier/0.log" Jan 26 20:23:38 crc kubenswrapper[4787]: I0126 20:23:38.802668 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-dd98956c6-86mmx_e19143c4-6c81-4684-8712-ba99d98ba256/barbican-api/0.log" Jan 26 20:23:38 crc kubenswrapper[4787]: I0126 20:23:38.851346 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-dd98956c6-86mmx_e19143c4-6c81-4684-8712-ba99d98ba256/barbican-api-log/0.log" Jan 26 20:23:39 crc kubenswrapper[4787]: I0126 20:23:39.206609 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-6997547fdb-fnvb9_0e794d83-0f4e-4111-8873-23376c85c1d8/barbican-keystone-listener/0.log" Jan 26 20:23:39 crc kubenswrapper[4787]: I0126 20:23:39.311216 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6997547fdb-fnvb9_0e794d83-0f4e-4111-8873-23376c85c1d8/barbican-keystone-listener-log/0.log" Jan 26 20:23:39 crc kubenswrapper[4787]: I0126 20:23:39.379426 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b54957c57-vn9rc_dfb9e125-ffc8-4211-b147-04adec3df7ac/barbican-worker/0.log" Jan 26 20:23:39 crc kubenswrapper[4787]: I0126 20:23:39.436508 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b54957c57-vn9rc_dfb9e125-ffc8-4211-b147-04adec3df7ac/barbican-worker-log/0.log" Jan 26 20:23:40 crc kubenswrapper[4787]: I0126 20:23:40.127057 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7354e50d-99c1-4807-887d-5debe519ff46/ceilometer-central-agent/0.log" Jan 26 20:23:40 crc kubenswrapper[4787]: I0126 20:23:40.134405 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-v6ng2_68827c36-2f69-4ec0-a472-29afc9bb73ce/bootstrap-openstack-openstack-cell1/0.log" Jan 26 20:23:40 crc kubenswrapper[4787]: I0126 20:23:40.185886 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7354e50d-99c1-4807-887d-5debe519ff46/ceilometer-notification-agent/0.log" Jan 26 20:23:40 crc kubenswrapper[4787]: I0126 20:23:40.349601 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7354e50d-99c1-4807-887d-5debe519ff46/proxy-httpd/0.log" Jan 26 20:23:40 crc kubenswrapper[4787]: I0126 20:23:40.361104 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7354e50d-99c1-4807-887d-5debe519ff46/sg-core/0.log" Jan 26 20:23:40 crc 
kubenswrapper[4787]: I0126 20:23:40.464696 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-tvgbs_cd027093-fbd5-4f2a-897d-cdc67a88f7be/ceph-client-openstack-openstack-cell1/0.log" Jan 26 20:23:40 crc kubenswrapper[4787]: I0126 20:23:40.673509 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0dfa3018-6648-4c71-8640-3c888b057c57/cinder-api/0.log" Jan 26 20:23:40 crc kubenswrapper[4787]: I0126 20:23:40.720318 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0dfa3018-6648-4c71-8640-3c888b057c57/cinder-api-log/0.log" Jan 26 20:23:40 crc kubenswrapper[4787]: I0126 20:23:40.943587 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_69ef81f2-db43-4004-abdc-c34eacc8a2ae/probe/0.log" Jan 26 20:23:41 crc kubenswrapper[4787]: I0126 20:23:41.000143 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_69ef81f2-db43-4004-abdc-c34eacc8a2ae/cinder-backup/0.log" Jan 26 20:23:41 crc kubenswrapper[4787]: I0126 20:23:41.016825 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d34a51b6-302e-47bf-8a31-56019455d91f/cinder-scheduler/0.log" Jan 26 20:23:41 crc kubenswrapper[4787]: I0126 20:23:41.180497 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d34a51b6-302e-47bf-8a31-56019455d91f/probe/0.log" Jan 26 20:23:41 crc kubenswrapper[4787]: I0126 20:23:41.219583 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_3680052c-4070-45ac-8697-4e2050a95201/cinder-volume/0.log" Jan 26 20:23:41 crc kubenswrapper[4787]: I0126 20:23:41.288563 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_3680052c-4070-45ac-8697-4e2050a95201/probe/0.log" Jan 26 20:23:41 crc kubenswrapper[4787]: I0126 20:23:41.446329 4787 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-8d7vx_afa3f89e-2ba2-46b5-b87b-7d572971a173/configure-network-openstack-openstack-cell1/0.log" Jan 26 20:23:41 crc kubenswrapper[4787]: I0126 20:23:41.552961 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-t6gm5_fa9d3cb7-a099-4c24-aa56-3f74900d35fd/configure-os-openstack-openstack-cell1/0.log" Jan 26 20:23:41 crc kubenswrapper[4787]: I0126 20:23:41.635815 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5b94478455-fqg6h_58461a7b-bde1-45cf-81e3-0dae1ce65e7c/init/0.log" Jan 26 20:23:41 crc kubenswrapper[4787]: I0126 20:23:41.824683 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5b94478455-fqg6h_58461a7b-bde1-45cf-81e3-0dae1ce65e7c/init/0.log" Jan 26 20:23:41 crc kubenswrapper[4787]: I0126 20:23:41.908476 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5b94478455-fqg6h_58461a7b-bde1-45cf-81e3-0dae1ce65e7c/dnsmasq-dns/0.log" Jan 26 20:23:41 crc kubenswrapper[4787]: I0126 20:23:41.984465 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-v7456_8a7b5882-d776-47f7-a895-ff7728795475/download-cache-openstack-openstack-cell1/0.log" Jan 26 20:23:42 crc kubenswrapper[4787]: I0126 20:23:42.059186 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c4976056-8e89-4116-a348-937ae6765893/glance-httpd/0.log" Jan 26 20:23:42 crc kubenswrapper[4787]: I0126 20:23:42.088187 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c4976056-8e89-4116-a348-937ae6765893/glance-log/0.log" Jan 26 20:23:42 crc kubenswrapper[4787]: I0126 20:23:42.217619 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_7b22e06a-773a-4fc9-891b-631713f4de49/glance-httpd/0.log" Jan 26 20:23:42 crc kubenswrapper[4787]: I0126 20:23:42.253679 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7b22e06a-773a-4fc9-891b-631713f4de49/glance-log/0.log" Jan 26 20:23:42 crc kubenswrapper[4787]: I0126 20:23:42.409397 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7f84c7746-b2flw_6cfd1ead-fea5-4f01-b7a8-944f010495ad/heat-api/0.log" Jan 26 20:23:42 crc kubenswrapper[4787]: I0126 20:23:42.545969 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5b98f465d7-qqj9m_5dc2cf8e-097c-4099-a370-39e5a69eb862/heat-cfnapi/0.log" Jan 26 20:23:42 crc kubenswrapper[4787]: I0126 20:23:42.563450 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5765f6b7db-rc55s_0f7a71ea-1ae4-4f9b-b9dc-87c1671738a4/heat-engine/0.log" Jan 26 20:23:42 crc kubenswrapper[4787]: I0126 20:23:42.783857 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b5b7ff8f-5f4tv_f1fed33f-a577-4398-bd7f-dc9231312768/horizon/0.log" Jan 26 20:23:42 crc kubenswrapper[4787]: I0126 20:23:42.925764 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-h7vmn_bd66a7e4-b67a-4d04-ac6d-b0640cdc592c/install-certs-openstack-openstack-cell1/0.log" Jan 26 20:23:43 crc kubenswrapper[4787]: I0126 20:23:43.009900 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b5b7ff8f-5f4tv_f1fed33f-a577-4398-bd7f-dc9231312768/horizon-log/0.log" Jan 26 20:23:43 crc kubenswrapper[4787]: I0126 20:23:43.098284 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-dt69h_fcb3f748-1ccd-4119-b32b-6cab1a5ef232/install-os-openstack-openstack-cell1/0.log" Jan 26 20:23:43 crc kubenswrapper[4787]: 
I0126 20:23:43.268644 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490961-2c4cr_3245f462-9672-43fa-9589-1eaf84a33fa7/keystone-cron/0.log" Jan 26 20:23:43 crc kubenswrapper[4787]: I0126 20:23:43.326328 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7c9df67c87-rtkk8_eb9964c8-28d2-4c55-98ca-0cda083cf39d/keystone-api/0.log" Jan 26 20:23:43 crc kubenswrapper[4787]: I0126 20:23:43.405650 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_67828f47-7882-45f4-bec7-4c0de16894a4/kube-state-metrics/0.log" Jan 26 20:23:43 crc kubenswrapper[4787]: I0126 20:23:43.510585 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-r4h94_d06cd16e-f936-4443-8d39-d23a3f5a3a99/libvirt-openstack-openstack-cell1/0.log" Jan 26 20:23:43 crc kubenswrapper[4787]: I0126 20:23:43.642304 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_71cae724-1a8b-41b8-a3a9-eb3e70af9024/manila-api-log/0.log" Jan 26 20:23:43 crc kubenswrapper[4787]: I0126 20:23:43.776899 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_f49effb7-5427-4a7b-ba48-2137cdcddbe8/manila-scheduler/0.log" Jan 26 20:23:43 crc kubenswrapper[4787]: I0126 20:23:43.826042 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_71cae724-1a8b-41b8-a3a9-eb3e70af9024/manila-api/0.log" Jan 26 20:23:43 crc kubenswrapper[4787]: I0126 20:23:43.836490 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_f49effb7-5427-4a7b-ba48-2137cdcddbe8/probe/0.log" Jan 26 20:23:43 crc kubenswrapper[4787]: I0126 20:23:43.948713 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a/manila-share/0.log" Jan 26 20:23:44 crc kubenswrapper[4787]: I0126 20:23:44.024882 4787 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_e8c10fc5-ce29-47ce-84a2-b78f7aa94b6a/probe/0.log" Jan 26 20:23:44 crc kubenswrapper[4787]: I0126 20:23:44.274640 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7986b9755c-vhh5z_b41501cd-e353-4164-bd99-54a54d17c041/neutron-api/0.log" Jan 26 20:23:44 crc kubenswrapper[4787]: I0126 20:23:44.319898 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7986b9755c-vhh5z_b41501cd-e353-4164-bd99-54a54d17c041/neutron-httpd/0.log" Jan 26 20:23:44 crc kubenswrapper[4787]: I0126 20:23:44.412446 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-zbwbm_69b66133-0df1-4b99-b902-4197ea7b9bb7/neutron-dhcp-openstack-openstack-cell1/0.log" Jan 26 20:23:44 crc kubenswrapper[4787]: I0126 20:23:44.625607 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-v7w5x_435fa74c-76cb-40b5-b78d-479aeed03ddd/neutron-sriov-openstack-openstack-cell1/0.log" Jan 26 20:23:44 crc kubenswrapper[4787]: I0126 20:23:44.642162 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-chcs8_6cbdba7d-a110-4446-a5c1-071da22f49fd/neutron-metadata-openstack-openstack-cell1/0.log" Jan 26 20:23:44 crc kubenswrapper[4787]: I0126 20:23:44.985732 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c87ddc92-6046-48e8-91a9-5bfd2bc991c5/nova-api-log/0.log" Jan 26 20:23:44 crc kubenswrapper[4787]: I0126 20:23:44.987345 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c87ddc92-6046-48e8-91a9-5bfd2bc991c5/nova-api-api/0.log" Jan 26 20:23:45 crc kubenswrapper[4787]: I0126 20:23:45.157064 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_3c3fab93-c7b9-466b-a961-7b1f04d5e0e0/nova-cell0-conductor-conductor/0.log" Jan 26 20:23:45 crc kubenswrapper[4787]: I0126 20:23:45.325892 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6fafda68-5f85-4fbd-920d-372205b93018/nova-cell1-conductor-conductor/0.log" Jan 26 20:23:45 crc kubenswrapper[4787]: I0126 20:23:45.426984 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2834a5ab-22aa-4af2-b7b1-67a35657f0a8/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 20:23:45 crc kubenswrapper[4787]: I0126 20:23:45.559533 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrf57s_c3f02fe9-4c51-4bfb-8b84-a673b23ad0ca/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Jan 26 20:23:45 crc kubenswrapper[4787]: I0126 20:23:45.680825 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-hpfbg_83413341-3b4e-483b-9e6e-af2c64428fb1/nova-cell1-openstack-openstack-cell1/0.log" Jan 26 20:23:45 crc kubenswrapper[4787]: I0126 20:23:45.913555 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a1647dea-71d2-484b-89db-3d610a57e3fc/nova-metadata-metadata/0.log" Jan 26 20:23:45 crc kubenswrapper[4787]: I0126 20:23:45.996044 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a1647dea-71d2-484b-89db-3d610a57e3fc/nova-metadata-log/0.log" Jan 26 20:23:46 crc kubenswrapper[4787]: I0126 20:23:46.379736 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4884b53e-5d30-42ca-96d8-d72088dbc449/nova-scheduler-scheduler/0.log" Jan 26 20:23:46 crc kubenswrapper[4787]: I0126 20:23:46.399396 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-api-7bfb695c56-2zk57_fc21cbde-c282-4e55-80f9-3c12ded80c02/init/0.log" Jan 26 20:23:46 crc kubenswrapper[4787]: I0126 20:23:46.581233 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-7bfb695c56-2zk57_fc21cbde-c282-4e55-80f9-3c12ded80c02/init/0.log" Jan 26 20:23:46 crc kubenswrapper[4787]: I0126 20:23:46.734860 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-7bfb695c56-2zk57_fc21cbde-c282-4e55-80f9-3c12ded80c02/octavia-api-provider-agent/0.log" Jan 26 20:23:46 crc kubenswrapper[4787]: I0126 20:23:46.804633 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-7bfb695c56-2zk57_fc21cbde-c282-4e55-80f9-3c12ded80c02/octavia-api/0.log" Jan 26 20:23:46 crc kubenswrapper[4787]: I0126 20:23:46.808174 4787 patch_prober.go:28] interesting pod/machine-config-daemon-6x4t8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:23:46 crc kubenswrapper[4787]: I0126 20:23:46.808231 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:23:46 crc kubenswrapper[4787]: I0126 20:23:46.808285 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" Jan 26 20:23:46 crc kubenswrapper[4787]: I0126 20:23:46.809313 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd"} pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:23:46 crc kubenswrapper[4787]: I0126 20:23:46.809382 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerName="machine-config-daemon" containerID="cri-o://dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" gracePeriod=600 Jan 26 20:23:46 crc kubenswrapper[4787]: E0126 20:23:46.937682 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:23:46 crc kubenswrapper[4787]: I0126 20:23:46.944232 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-ffs52_8577bfe8-3f32-4374-8234-be0dd1530414/init/0.log" Jan 26 20:23:47 crc kubenswrapper[4787]: I0126 20:23:47.088129 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-ffs52_8577bfe8-3f32-4374-8234-be0dd1530414/init/0.log" Jan 26 20:23:47 crc kubenswrapper[4787]: I0126 20:23:47.093440 4787 generic.go:334] "Generic (PLEG): container finished" podID="418f020a-c193-4323-a29a-59c3ad0f1d35" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" exitCode=0 Jan 26 20:23:47 crc kubenswrapper[4787]: I0126 20:23:47.093472 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerDied","Data":"dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd"} Jan 26 20:23:47 crc kubenswrapper[4787]: I0126 20:23:47.093500 4787 scope.go:117] "RemoveContainer" containerID="af13e4b0350755470d2c7505aff4ec237719443e8038715093b471d7db2a0262" Jan 26 20:23:47 crc kubenswrapper[4787]: I0126 20:23:47.094197 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:23:47 crc kubenswrapper[4787]: E0126 20:23:47.094414 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:23:47 crc kubenswrapper[4787]: I0126 20:23:47.498750 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-gj49t_33a7c869-71e5-4f17-9a09-ad53a1f02519/init/0.log" Jan 26 20:23:47 crc kubenswrapper[4787]: I0126 20:23:47.598561 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-ffs52_8577bfe8-3f32-4374-8234-be0dd1530414/octavia-healthmanager/0.log" Jan 26 20:23:47 crc kubenswrapper[4787]: I0126 20:23:47.973097 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-gj49t_33a7c869-71e5-4f17-9a09-ad53a1f02519/init/0.log" Jan 26 20:23:48 crc kubenswrapper[4787]: I0126 20:23:48.003275 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-gj49t_33a7c869-71e5-4f17-9a09-ad53a1f02519/octavia-housekeeping/0.log" Jan 26 20:23:48 crc kubenswrapper[4787]: 
I0126 20:23:48.037803 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-7b97d6bc64-dnb2k_596da395-e80c-4fbe-bd79-00dbcb170095/init/0.log" Jan 26 20:23:48 crc kubenswrapper[4787]: I0126 20:23:48.224236 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-7b97d6bc64-dnb2k_596da395-e80c-4fbe-bd79-00dbcb170095/init/0.log" Jan 26 20:23:48 crc kubenswrapper[4787]: I0126 20:23:48.226803 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-7b97d6bc64-dnb2k_596da395-e80c-4fbe-bd79-00dbcb170095/octavia-amphora-httpd/0.log" Jan 26 20:23:48 crc kubenswrapper[4787]: I0126 20:23:48.290442 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-k8plr_578a3da9-d799-446d-ae5e-41ab628669b9/init/0.log" Jan 26 20:23:48 crc kubenswrapper[4787]: I0126 20:23:48.565128 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-vhgdv_e82ebb82-ace0-4d7f-8535-3533fa78f9d2/init/0.log" Jan 26 20:23:48 crc kubenswrapper[4787]: I0126 20:23:48.606129 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-k8plr_578a3da9-d799-446d-ae5e-41ab628669b9/init/0.log" Jan 26 20:23:48 crc kubenswrapper[4787]: I0126 20:23:48.632981 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-k8plr_578a3da9-d799-446d-ae5e-41ab628669b9/octavia-rsyslog/0.log" Jan 26 20:23:48 crc kubenswrapper[4787]: I0126 20:23:48.810497 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-vhgdv_e82ebb82-ace0-4d7f-8535-3533fa78f9d2/init/0.log" Jan 26 20:23:49 crc kubenswrapper[4787]: I0126 20:23:49.006488 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8ae41cad-1186-47da-b18b-35613fd332c2/mysql-bootstrap/0.log" Jan 26 20:23:49 crc kubenswrapper[4787]: I0126 20:23:49.040435 4787 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-vhgdv_e82ebb82-ace0-4d7f-8535-3533fa78f9d2/octavia-worker/0.log" Jan 26 20:23:49 crc kubenswrapper[4787]: I0126 20:23:49.632894 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f654521f-66fa-4cb5-b058-bfdd66311d5c/mysql-bootstrap/0.log" Jan 26 20:23:49 crc kubenswrapper[4787]: I0126 20:23:49.642933 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8ae41cad-1186-47da-b18b-35613fd332c2/galera/0.log" Jan 26 20:23:49 crc kubenswrapper[4787]: I0126 20:23:49.715370 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8ae41cad-1186-47da-b18b-35613fd332c2/mysql-bootstrap/0.log" Jan 26 20:23:49 crc kubenswrapper[4787]: I0126 20:23:49.899433 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f654521f-66fa-4cb5-b058-bfdd66311d5c/mysql-bootstrap/0.log" Jan 26 20:23:49 crc kubenswrapper[4787]: I0126 20:23:49.959732 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9e0d751f-3670-4d36-8836-2b4812a78127/openstackclient/0.log" Jan 26 20:23:49 crc kubenswrapper[4787]: I0126 20:23:49.982586 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f654521f-66fa-4cb5-b058-bfdd66311d5c/galera/0.log" Jan 26 20:23:50 crc kubenswrapper[4787]: I0126 20:23:50.227365 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-7k54j_2a3faefb-09b0-40ca-b548-c8b3546778ee/openstack-network-exporter/0.log" Jan 26 20:23:50 crc kubenswrapper[4787]: I0126 20:23:50.231681 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nnjf5_105e0e3a-23ba-431e-8736-ffce799cf8f2/ovn-controller/0.log" Jan 26 20:23:50 crc kubenswrapper[4787]: I0126 20:23:50.518610 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-4nlbm_7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14/ovsdb-server-init/0.log" Jan 26 20:23:50 crc kubenswrapper[4787]: I0126 20:23:50.816715 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4nlbm_7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14/ovs-vswitchd/0.log" Jan 26 20:23:50 crc kubenswrapper[4787]: I0126 20:23:50.873150 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4nlbm_7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14/ovsdb-server/0.log" Jan 26 20:23:50 crc kubenswrapper[4787]: I0126 20:23:50.915090 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4nlbm_7ac1ad50-9bfa-4569-b68e-d1ef0deb1f14/ovsdb-server-init/0.log" Jan 26 20:23:51 crc kubenswrapper[4787]: I0126 20:23:51.035117 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5d3d8e05-de0b-4435-b921-76ebd8bf99c9/openstack-network-exporter/0.log" Jan 26 20:23:51 crc kubenswrapper[4787]: I0126 20:23:51.059137 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5d3d8e05-de0b-4435-b921-76ebd8bf99c9/ovn-northd/0.log" Jan 26 20:23:51 crc kubenswrapper[4787]: I0126 20:23:51.206436 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-2sq4b_20232ae2-355a-48d9-87f8-8132caa1fff6/ovn-openstack-openstack-cell1/0.log" Jan 26 20:23:51 crc kubenswrapper[4787]: I0126 20:23:51.285165 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d6fdcdea-726a-4606-adec-82e9dbf50e97/openstack-network-exporter/0.log" Jan 26 20:23:51 crc kubenswrapper[4787]: I0126 20:23:51.446985 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d6fdcdea-726a-4606-adec-82e9dbf50e97/ovsdbserver-nb/0.log" Jan 26 20:23:51 crc kubenswrapper[4787]: I0126 20:23:51.504976 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-1_48ab69d9-de2a-4b50-9b0a-cb9c3f8975df/openstack-network-exporter/0.log" Jan 26 20:23:51 crc kubenswrapper[4787]: I0126 20:23:51.571783 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_48ab69d9-de2a-4b50-9b0a-cb9c3f8975df/ovsdbserver-nb/0.log" Jan 26 20:23:51 crc kubenswrapper[4787]: I0126 20:23:51.722082 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_07252574-f41c-451b-a7c2-1dd0c52dc509/openstack-network-exporter/0.log" Jan 26 20:23:51 crc kubenswrapper[4787]: I0126 20:23:51.725804 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_07252574-f41c-451b-a7c2-1dd0c52dc509/ovsdbserver-nb/0.log" Jan 26 20:23:51 crc kubenswrapper[4787]: I0126 20:23:51.947171 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2936574c-79b1-4311-92b6-3f9c430851fe/openstack-network-exporter/0.log" Jan 26 20:23:51 crc kubenswrapper[4787]: I0126 20:23:51.986921 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2936574c-79b1-4311-92b6-3f9c430851fe/ovsdbserver-sb/0.log" Jan 26 20:23:52 crc kubenswrapper[4787]: I0126 20:23:52.176936 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_0d9316b7-843a-46b9-ade9-1fb19a748269/openstack-network-exporter/0.log" Jan 26 20:23:52 crc kubenswrapper[4787]: I0126 20:23:52.234826 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_0d9316b7-843a-46b9-ade9-1fb19a748269/ovsdbserver-sb/0.log" Jan 26 20:23:52 crc kubenswrapper[4787]: I0126 20:23:52.312939 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_8757dfc6-75dd-4355-9533-c78a28f42aff/openstack-network-exporter/0.log" Jan 26 20:23:52 crc kubenswrapper[4787]: I0126 20:23:52.397799 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-2_8757dfc6-75dd-4355-9533-c78a28f42aff/ovsdbserver-sb/0.log" Jan 26 20:23:52 crc kubenswrapper[4787]: I0126 20:23:52.555827 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7d5ff996bd-vhgds_737ad08a-5871-45a8-b16f-085d03fbaba4/placement-api/0.log" Jan 26 20:23:52 crc kubenswrapper[4787]: I0126 20:23:52.599495 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7d5ff996bd-vhgds_737ad08a-5871-45a8-b16f-085d03fbaba4/placement-log/0.log" Jan 26 20:23:52 crc kubenswrapper[4787]: I0126 20:23:52.729220 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-crmx64_716536ef-6d7f-4cc7-8e5b-5cc361d89e85/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Jan 26 20:23:52 crc kubenswrapper[4787]: I0126 20:23:52.909464 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c7c3b3fd-be4f-4013-9590-9b5640e0b224/init-config-reloader/0.log" Jan 26 20:23:53 crc kubenswrapper[4787]: I0126 20:23:53.349722 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_76c766a7-ec1b-4399-a988-70fa15711c4d/memcached/0.log" Jan 26 20:23:53 crc kubenswrapper[4787]: I0126 20:23:53.421219 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c7c3b3fd-be4f-4013-9590-9b5640e0b224/init-config-reloader/0.log" Jan 26 20:23:53 crc kubenswrapper[4787]: I0126 20:23:53.438563 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c7c3b3fd-be4f-4013-9590-9b5640e0b224/config-reloader/0.log" Jan 26 20:23:53 crc kubenswrapper[4787]: I0126 20:23:53.451778 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c7c3b3fd-be4f-4013-9590-9b5640e0b224/prometheus/0.log" Jan 26 20:23:53 crc 
kubenswrapper[4787]: I0126 20:23:53.476190 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c7c3b3fd-be4f-4013-9590-9b5640e0b224/thanos-sidecar/0.log" Jan 26 20:23:53 crc kubenswrapper[4787]: I0126 20:23:53.641903 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2f7e7278-7c9b-4123-9866-dd61b2dcb23f/setup-container/0.log" Jan 26 20:23:53 crc kubenswrapper[4787]: I0126 20:23:53.809197 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2f7e7278-7c9b-4123-9866-dd61b2dcb23f/setup-container/0.log" Jan 26 20:23:53 crc kubenswrapper[4787]: I0126 20:23:53.872800 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2f7e7278-7c9b-4123-9866-dd61b2dcb23f/rabbitmq/0.log" Jan 26 20:23:53 crc kubenswrapper[4787]: I0126 20:23:53.915405 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a09fe28b-c9a5-46b1-a327-c9f4eac2036f/setup-container/0.log" Jan 26 20:23:54 crc kubenswrapper[4787]: I0126 20:23:54.122940 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a09fe28b-c9a5-46b1-a327-c9f4eac2036f/rabbitmq/0.log" Jan 26 20:23:54 crc kubenswrapper[4787]: I0126 20:23:54.198198 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-vgzj2_521ae3b0-3a63-442a-885c-09a689d344d9/reboot-os-openstack-openstack-cell1/0.log" Jan 26 20:23:54 crc kubenswrapper[4787]: I0126 20:23:54.230676 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a09fe28b-c9a5-46b1-a327-c9f4eac2036f/setup-container/0.log" Jan 26 20:23:54 crc kubenswrapper[4787]: I0126 20:23:54.352745 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-84685_c7311cf8-1525-446e-9902-3468838d7968/run-os-openstack-openstack-cell1/0.log" Jan 26 20:23:54 crc kubenswrapper[4787]: I0126 20:23:54.584171 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-nb2mz_bb23c8e5-45c8-466b-8ab8-0350d707ee6b/ssh-known-hosts-openstack/0.log" Jan 26 20:23:54 crc kubenswrapper[4787]: I0126 20:23:54.672190 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-wbcsn_529c5af2-a36e-4826-88a7-0deed8d59e7c/telemetry-openstack-openstack-cell1/0.log" Jan 26 20:23:54 crc kubenswrapper[4787]: I0126 20:23:54.826726 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-wgs22_c86490c4-11c2-4a38-9de0-cdb076526ea1/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Jan 26 20:23:54 crc kubenswrapper[4787]: I0126 20:23:54.953522 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-r6fvq_172ce100-893e-420a-bb76-beda7ab879db/validate-network-openstack-openstack-cell1/0.log" Jan 26 20:23:59 crc kubenswrapper[4787]: I0126 20:23:59.589077 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:23:59 crc kubenswrapper[4787]: E0126 20:23:59.590791 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:24:13 crc kubenswrapper[4787]: I0126 20:24:13.589188 4787 scope.go:117] "RemoveContainer" 
containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:24:13 crc kubenswrapper[4787]: E0126 20:24:13.589972 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:24:20 crc kubenswrapper[4787]: I0126 20:24:20.518408 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-w7bvz_a72edcac-00ca-46a7-ab30-551c750eb2cd/manager/0.log" Jan 26 20:24:20 crc kubenswrapper[4787]: I0126 20:24:20.587467 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p_a27fe4be-d228-472f-9b3f-f6e7a938e264/util/0.log" Jan 26 20:24:20 crc kubenswrapper[4787]: I0126 20:24:20.840464 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p_a27fe4be-d228-472f-9b3f-f6e7a938e264/pull/0.log" Jan 26 20:24:20 crc kubenswrapper[4787]: I0126 20:24:20.846600 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p_a27fe4be-d228-472f-9b3f-f6e7a938e264/pull/0.log" Jan 26 20:24:20 crc kubenswrapper[4787]: I0126 20:24:20.848835 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p_a27fe4be-d228-472f-9b3f-f6e7a938e264/util/0.log" Jan 26 20:24:21 crc kubenswrapper[4787]: I0126 20:24:21.758534 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p_a27fe4be-d228-472f-9b3f-f6e7a938e264/extract/0.log" Jan 26 20:24:21 crc kubenswrapper[4787]: I0126 20:24:21.758735 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p_a27fe4be-d228-472f-9b3f-f6e7a938e264/pull/0.log" Jan 26 20:24:21 crc kubenswrapper[4787]: I0126 20:24:21.799985 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bc5eaa22304980238d8dced797e82f3b9eb00ffe7f126b6a4e3c0536f22gc9p_a27fe4be-d228-472f-9b3f-f6e7a938e264/util/0.log" Jan 26 20:24:22 crc kubenswrapper[4787]: I0126 20:24:22.040632 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-ggtn5_3923acf7-e06a-4351-84aa-7def61b4ca71/manager/0.log" Jan 26 20:24:22 crc kubenswrapper[4787]: I0126 20:24:22.053529 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-9n2vm_6e4131c6-1507-4b15-92b7-29a2fe7f3775/manager/0.log" Jan 26 20:24:22 crc kubenswrapper[4787]: I0126 20:24:22.315492 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-nvpxf_62502932-23fe-4a77-a89a-26fd15f0f44f/manager/0.log" Jan 26 20:24:22 crc kubenswrapper[4787]: I0126 20:24:22.340547 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-glflp_c5f0e34d-3e5d-458a-a560-08769cb30849/manager/0.log" Jan 26 20:24:22 crc kubenswrapper[4787]: I0126 20:24:22.495840 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-twprk_a42402ec-c6f5-4be4-b649-1bfb41ebf1b0/manager/0.log" Jan 26 20:24:22 crc kubenswrapper[4787]: I0126 20:24:22.724012 4787 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-gfrbb_87ebc1f2-80bd-46db-a605-c3667e656f5b/manager/0.log" Jan 26 20:24:23 crc kubenswrapper[4787]: I0126 20:24:23.130795 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-m9g4s_8fe5e013-7524-461a-9fae-0867594144d5/manager/0.log" Jan 26 20:24:23 crc kubenswrapper[4787]: I0126 20:24:23.536427 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-tcj6b_b0561e24-5692-4790-9cbd-3a74a8c3ce69/manager/0.log" Jan 26 20:24:23 crc kubenswrapper[4787]: I0126 20:24:23.612747 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bswjx_01d314a4-2f86-4cc3-ac94-7a09b363a05d/manager/0.log" Jan 26 20:24:23 crc kubenswrapper[4787]: I0126 20:24:23.823723 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-xqpw8_52a2ecbf-eca4-447b-a516-e8e71194c5ff/manager/0.log" Jan 26 20:24:23 crc kubenswrapper[4787]: I0126 20:24:23.869430 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-4v6fm_c3415733-55e0-4c4f-8bb6-0663ddf67633/manager/0.log" Jan 26 20:24:24 crc kubenswrapper[4787]: I0126 20:24:24.139051 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-vqc5p_8f756ec3-7e2e-4c11-8b8b-50b50ccc6ad3/manager/0.log" Jan 26 20:24:24 crc kubenswrapper[4787]: I0126 20:24:24.279919 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-zhtc5_ffe1c0cc-9aaf-4d6d-811b-2fd4e17d7ce7/manager/0.log" Jan 26 20:24:24 crc kubenswrapper[4787]: I0126 
20:24:24.295300 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-dbc4d85f5-z7wm5_70abdb24-0f0e-477a-8c22-7a01f73c05f2/manager/0.log" Jan 26 20:24:24 crc kubenswrapper[4787]: I0126 20:24:24.434207 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6c56cb8cbf-24hlc_7a37d9dd-cbf7-4a37-980b-c7e6a455703e/operator/0.log" Jan 26 20:24:24 crc kubenswrapper[4787]: I0126 20:24:24.802822 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-trx4l_3ae4ae1f-6a81-42d2-b2d6-4087d3be5d59/manager/0.log" Jan 26 20:24:24 crc kubenswrapper[4787]: I0126 20:24:24.893033 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-9zpn7_dc3edcd9-ded9-49ab-bed4-90a69169cf3f/registry-server/0.log" Jan 26 20:24:24 crc kubenswrapper[4787]: I0126 20:24:24.985279 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-4wd75_483c1dd7-425f-43b5-a848-efdc2d9899d0/manager/0.log" Jan 26 20:24:25 crc kubenswrapper[4787]: I0126 20:24:25.116532 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-n5tjf_77de585e-c649-4a8e-82e5-fea5379cac6d/operator/0.log" Jan 26 20:24:25 crc kubenswrapper[4787]: I0126 20:24:25.237563 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-6k5bf_4174fb8a-905f-4d3a-9dbc-5212b68319f2/manager/0.log" Jan 26 20:24:25 crc kubenswrapper[4787]: I0126 20:24:25.484719 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-clf9c_5fd3204c-f4d7-466e-94b4-8463575086be/manager/0.log" Jan 26 20:24:25 crc kubenswrapper[4787]: 
I0126 20:24:25.511637 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-clf9c_5fd3204c-f4d7-466e-94b4-8463575086be/manager/1.log" Jan 26 20:24:25 crc kubenswrapper[4787]: I0126 20:24:25.557071 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-x8ql6_2d7744e1-c01e-4dd9-87f2-7aa6695c2d60/manager/0.log" Jan 26 20:24:25 crc kubenswrapper[4787]: I0126 20:24:25.716005 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-fmkqf_302ec2db-963c-44d7-941e-51471e7ae3bb/manager/0.log" Jan 26 20:24:26 crc kubenswrapper[4787]: I0126 20:24:26.569744 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-b585d977c-4h4sg_b2e37e1d-9342-4006-9626-273178d301b0/manager/0.log" Jan 26 20:24:27 crc kubenswrapper[4787]: I0126 20:24:27.589078 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:24:27 crc kubenswrapper[4787]: E0126 20:24:27.589383 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:24:41 crc kubenswrapper[4787]: I0126 20:24:41.598450 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:24:41 crc kubenswrapper[4787]: E0126 20:24:41.599520 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:24:49 crc kubenswrapper[4787]: I0126 20:24:49.046093 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5bm7j_c488c5cb-fbf3-4ca4-9ef7-3e171e36b302/control-plane-machine-set-operator/0.log" Jan 26 20:24:49 crc kubenswrapper[4787]: I0126 20:24:49.291459 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hxjzt_1b873062-3dbd-40cb-92f9-cc0fbfd98f2b/kube-rbac-proxy/0.log" Jan 26 20:24:49 crc kubenswrapper[4787]: I0126 20:24:49.325641 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hxjzt_1b873062-3dbd-40cb-92f9-cc0fbfd98f2b/machine-api-operator/0.log" Jan 26 20:24:55 crc kubenswrapper[4787]: I0126 20:24:55.590088 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:24:55 crc kubenswrapper[4787]: E0126 20:24:55.591254 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:25:05 crc kubenswrapper[4787]: I0126 20:25:05.579798 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-mlsxn_e1c7699f-430a-4049-ad92-19240c15d2ba/cert-manager-controller/0.log" 
Jan 26 20:25:05 crc kubenswrapper[4787]: I0126 20:25:05.688070 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-l8878_9514fbda-b64d-456b-afdc-400a97c84beb/cert-manager-cainjector/0.log" Jan 26 20:25:05 crc kubenswrapper[4787]: I0126 20:25:05.763055 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-zds8t_377a7b82-6a6f-4377-b05e-70fddbedaa1e/cert-manager-webhook/0.log" Jan 26 20:25:09 crc kubenswrapper[4787]: I0126 20:25:09.589975 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:25:09 crc kubenswrapper[4787]: E0126 20:25:09.591126 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:25:20 crc kubenswrapper[4787]: I0126 20:25:20.589534 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:25:20 crc kubenswrapper[4787]: E0126 20:25:20.590645 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.376909 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-fcdck_e7890c8e-6fb8-42b4-a953-d5dfac5ed67a/nmstate-console-plugin/0.log" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.415379 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5tsdc"] Jan 26 20:25:21 crc kubenswrapper[4787]: E0126 20:25:21.415965 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161b0522-4a50-4b5e-9bcb-44c335153da5" containerName="extract-utilities" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.415984 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="161b0522-4a50-4b5e-9bcb-44c335153da5" containerName="extract-utilities" Jan 26 20:25:21 crc kubenswrapper[4787]: E0126 20:25:21.416005 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161b0522-4a50-4b5e-9bcb-44c335153da5" containerName="extract-content" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.416014 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="161b0522-4a50-4b5e-9bcb-44c335153da5" containerName="extract-content" Jan 26 20:25:21 crc kubenswrapper[4787]: E0126 20:25:21.416038 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161b0522-4a50-4b5e-9bcb-44c335153da5" containerName="registry-server" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.416046 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="161b0522-4a50-4b5e-9bcb-44c335153da5" containerName="registry-server" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.416329 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="161b0522-4a50-4b5e-9bcb-44c335153da5" containerName="registry-server" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.418397 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.441383 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5tsdc"] Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.455644 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tjd5k_f611b8d8-b794-4b15-bb02-25776ca06b96/nmstate-handler/0.log" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.506377 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-utilities\") pod \"community-operators-5tsdc\" (UID: \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.506881 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7599\" (UniqueName: \"kubernetes.io/projected/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-kube-api-access-d7599\") pod \"community-operators-5tsdc\" (UID: \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.506990 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-catalog-content\") pod \"community-operators-5tsdc\" (UID: \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.609104 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7599\" (UniqueName: \"kubernetes.io/projected/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-kube-api-access-d7599\") 
pod \"community-operators-5tsdc\" (UID: \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.609161 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-catalog-content\") pod \"community-operators-5tsdc\" (UID: \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.609234 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-utilities\") pod \"community-operators-5tsdc\" (UID: \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.609775 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-catalog-content\") pod \"community-operators-5tsdc\" (UID: \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.609832 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-utilities\") pod \"community-operators-5tsdc\" (UID: \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.627608 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7599\" (UniqueName: \"kubernetes.io/projected/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-kube-api-access-d7599\") pod \"community-operators-5tsdc\" (UID: 
\"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.661838 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rwqxq_e48f5af3-d4d5-4e5d-9bfb-ce8ce7e52ab9/kube-rbac-proxy/0.log" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.759057 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:21 crc kubenswrapper[4787]: I0126 20:25:21.778163 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rwqxq_e48f5af3-d4d5-4e5d-9bfb-ce8ce7e52ab9/nmstate-metrics/0.log" Jan 26 20:25:22 crc kubenswrapper[4787]: I0126 20:25:22.069860 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-wbxfd_386546ef-4d1a-47cc-badf-1bff4394dbf3/nmstate-operator/0.log" Jan 26 20:25:22 crc kubenswrapper[4787]: I0126 20:25:22.165344 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-pk9n5_d6d7ea7c-96d8-4ed5-a2e3-b5e8012732c9/nmstate-webhook/0.log" Jan 26 20:25:22 crc kubenswrapper[4787]: I0126 20:25:22.365607 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5tsdc"] Jan 26 20:25:22 crc kubenswrapper[4787]: W0126 20:25:22.377475 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fa80c57_f0a4_4fe3_8beb_cbb90126c1b5.slice/crio-91c905374d27e21b8ea942b6ea16c9bc42c5c1866716e5bcda728ef09ba4e4f9 WatchSource:0}: Error finding container 91c905374d27e21b8ea942b6ea16c9bc42c5c1866716e5bcda728ef09ba4e4f9: Status 404 returned error can't find the container with id 91c905374d27e21b8ea942b6ea16c9bc42c5c1866716e5bcda728ef09ba4e4f9 Jan 26 20:25:23 crc kubenswrapper[4787]: 
I0126 20:25:23.081569 4787 generic.go:334] "Generic (PLEG): container finished" podID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" containerID="118c43244574d4b48ea0117b53fd1c649ae0ae570a446119257d4c6b03bfa647" exitCode=0 Jan 26 20:25:23 crc kubenswrapper[4787]: I0126 20:25:23.081684 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tsdc" event={"ID":"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5","Type":"ContainerDied","Data":"118c43244574d4b48ea0117b53fd1c649ae0ae570a446119257d4c6b03bfa647"} Jan 26 20:25:23 crc kubenswrapper[4787]: I0126 20:25:23.081823 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tsdc" event={"ID":"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5","Type":"ContainerStarted","Data":"91c905374d27e21b8ea942b6ea16c9bc42c5c1866716e5bcda728ef09ba4e4f9"} Jan 26 20:25:23 crc kubenswrapper[4787]: I0126 20:25:23.083464 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:25:24 crc kubenswrapper[4787]: I0126 20:25:24.091674 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tsdc" event={"ID":"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5","Type":"ContainerStarted","Data":"37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85"} Jan 26 20:25:25 crc kubenswrapper[4787]: I0126 20:25:25.103926 4787 generic.go:334] "Generic (PLEG): container finished" podID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" containerID="37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85" exitCode=0 Jan 26 20:25:25 crc kubenswrapper[4787]: I0126 20:25:25.103985 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tsdc" event={"ID":"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5","Type":"ContainerDied","Data":"37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85"} Jan 26 20:25:26 crc kubenswrapper[4787]: I0126 20:25:26.116182 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tsdc" event={"ID":"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5","Type":"ContainerStarted","Data":"ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f"} Jan 26 20:25:26 crc kubenswrapper[4787]: I0126 20:25:26.142398 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5tsdc" podStartSLOduration=2.699608338 podStartE2EDuration="5.142378221s" podCreationTimestamp="2026-01-26 20:25:21 +0000 UTC" firstStartedPulling="2026-01-26 20:25:23.083241392 +0000 UTC m=+9691.790377525" lastFinishedPulling="2026-01-26 20:25:25.526011275 +0000 UTC m=+9694.233147408" observedRunningTime="2026-01-26 20:25:26.134755754 +0000 UTC m=+9694.841891887" watchObservedRunningTime="2026-01-26 20:25:26.142378221 +0000 UTC m=+9694.849514344" Jan 26 20:25:31 crc kubenswrapper[4787]: I0126 20:25:31.760157 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:31 crc kubenswrapper[4787]: I0126 20:25:31.760792 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:31 crc kubenswrapper[4787]: I0126 20:25:31.778827 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f654521f-66fa-4cb5-b058-bfdd66311d5c" containerName="galera" probeResult="failure" output="command timed out" Jan 26 20:25:31 crc kubenswrapper[4787]: I0126 20:25:31.973504 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:32 crc kubenswrapper[4787]: I0126 20:25:32.346682 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.009352 4787 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5tsdc"] Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.191745 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5tsdc" podUID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" containerName="registry-server" containerID="cri-o://ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f" gracePeriod=2 Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.742383 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.790212 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-utilities\") pod \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\" (UID: \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.790267 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7599\" (UniqueName: \"kubernetes.io/projected/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-kube-api-access-d7599\") pod \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\" (UID: \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.790419 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-catalog-content\") pod \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\" (UID: \"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5\") " Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.804641 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-utilities" (OuterVolumeSpecName: "utilities") pod 
"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" (UID: "7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.824380 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-kube-api-access-d7599" (OuterVolumeSpecName: "kube-api-access-d7599") pod "7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" (UID: "7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5"). InnerVolumeSpecName "kube-api-access-d7599". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.836227 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" (UID: "7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.892318 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.892664 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:25:34 crc kubenswrapper[4787]: I0126 20:25:34.892675 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7599\" (UniqueName: \"kubernetes.io/projected/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5-kube-api-access-d7599\") on node \"crc\" DevicePath \"\"" Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.205588 4787 generic.go:334] "Generic (PLEG): container finished" podID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" containerID="ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f" exitCode=0 Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.205631 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tsdc" event={"ID":"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5","Type":"ContainerDied","Data":"ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f"} Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.205653 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tsdc" event={"ID":"7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5","Type":"ContainerDied","Data":"91c905374d27e21b8ea942b6ea16c9bc42c5c1866716e5bcda728ef09ba4e4f9"} Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.205669 4787 scope.go:117] "RemoveContainer" containerID="ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f" Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 
20:25:35.205777 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5tsdc" Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.248282 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5tsdc"] Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.251534 4787 scope.go:117] "RemoveContainer" containerID="37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85" Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.257680 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5tsdc"] Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.277483 4787 scope.go:117] "RemoveContainer" containerID="118c43244574d4b48ea0117b53fd1c649ae0ae570a446119257d4c6b03bfa647" Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.314826 4787 scope.go:117] "RemoveContainer" containerID="ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f" Jan 26 20:25:35 crc kubenswrapper[4787]: E0126 20:25:35.315332 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f\": container with ID starting with ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f not found: ID does not exist" containerID="ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f" Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.315376 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f"} err="failed to get container status \"ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f\": rpc error: code = NotFound desc = could not find container \"ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f\": container with ID starting with 
ee3f4e980a33bd62ac753815ab8d7463dadbdc2db67049ff7116c5572763942f not found: ID does not exist" Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.315405 4787 scope.go:117] "RemoveContainer" containerID="37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85" Jan 26 20:25:35 crc kubenswrapper[4787]: E0126 20:25:35.316398 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85\": container with ID starting with 37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85 not found: ID does not exist" containerID="37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85" Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.316495 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85"} err="failed to get container status \"37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85\": rpc error: code = NotFound desc = could not find container \"37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85\": container with ID starting with 37905650fd15742ceb7452ce5a61e09006a90ee28ed695b4178016eb16d43f85 not found: ID does not exist" Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.316575 4787 scope.go:117] "RemoveContainer" containerID="118c43244574d4b48ea0117b53fd1c649ae0ae570a446119257d4c6b03bfa647" Jan 26 20:25:35 crc kubenswrapper[4787]: E0126 20:25:35.316887 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"118c43244574d4b48ea0117b53fd1c649ae0ae570a446119257d4c6b03bfa647\": container with ID starting with 118c43244574d4b48ea0117b53fd1c649ae0ae570a446119257d4c6b03bfa647 not found: ID does not exist" containerID="118c43244574d4b48ea0117b53fd1c649ae0ae570a446119257d4c6b03bfa647" Jan 26 20:25:35 crc 
kubenswrapper[4787]: I0126 20:25:35.316928 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"118c43244574d4b48ea0117b53fd1c649ae0ae570a446119257d4c6b03bfa647"} err="failed to get container status \"118c43244574d4b48ea0117b53fd1c649ae0ae570a446119257d4c6b03bfa647\": rpc error: code = NotFound desc = could not find container \"118c43244574d4b48ea0117b53fd1c649ae0ae570a446119257d4c6b03bfa647\": container with ID starting with 118c43244574d4b48ea0117b53fd1c649ae0ae570a446119257d4c6b03bfa647 not found: ID does not exist" Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.590458 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:25:35 crc kubenswrapper[4787]: E0126 20:25:35.591396 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:25:35 crc kubenswrapper[4787]: I0126 20:25:35.601425 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" path="/var/lib/kubelet/pods/7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5/volumes" Jan 26 20:25:39 crc kubenswrapper[4787]: I0126 20:25:39.668738 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-tlzxd_5464b021-25b7-421a-84fb-464912bd7891/prometheus-operator/0.log" Jan 26 20:25:39 crc kubenswrapper[4787]: I0126 20:25:39.834132 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz_97e77b51-0240-4d93-8966-5b1e733ccf08/prometheus-operator-admission-webhook/0.log" Jan 26 20:25:39 crc kubenswrapper[4787]: I0126 20:25:39.839779 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf_9c9a4c7d-e1c8-4b61-b756-c32495dfc027/prometheus-operator-admission-webhook/0.log" Jan 26 20:25:40 crc kubenswrapper[4787]: I0126 20:25:40.057171 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-pb6sd_3dcba907-3e69-4b6b-bdbf-89b17b09a8f1/operator/0.log" Jan 26 20:25:40 crc kubenswrapper[4787]: I0126 20:25:40.082292 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-v7v9m_7f8c4487-cd5a-456d-8a69-0b1296b4a687/perses-operator/0.log" Jan 26 20:25:46 crc kubenswrapper[4787]: I0126 20:25:46.590048 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:25:46 crc kubenswrapper[4787]: E0126 20:25:46.590762 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:25:57 crc kubenswrapper[4787]: I0126 20:25:57.625718 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:25:57 crc kubenswrapper[4787]: E0126 20:25:57.627502 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:25:57 crc kubenswrapper[4787]: I0126 20:25:57.909541 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6jqpw_456244df-ce3a-476e-b68a-2c0d37f24aa5/kube-rbac-proxy/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.140582 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-frr-files/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.387914 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-metrics/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.395240 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6jqpw_456244df-ce3a-476e-b68a-2c0d37f24aa5/controller/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.411138 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-frr-files/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.435857 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-reloader/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.548650 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-reloader/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.731067 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-frr-files/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.731631 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-reloader/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.732781 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-metrics/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.773774 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-metrics/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.926267 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-frr-files/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.935072 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-metrics/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.971757 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/cp-reloader/0.log" Jan 26 20:25:58 crc kubenswrapper[4787]: I0126 20:25:58.976994 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/controller/0.log" Jan 26 20:25:59 crc kubenswrapper[4787]: I0126 20:25:59.118088 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/frr-metrics/0.log" Jan 26 20:25:59 crc kubenswrapper[4787]: I0126 20:25:59.164827 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/kube-rbac-proxy/0.log" Jan 26 20:25:59 crc kubenswrapper[4787]: I0126 20:25:59.202504 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/kube-rbac-proxy-frr/0.log" Jan 26 20:25:59 crc kubenswrapper[4787]: I0126 20:25:59.407615 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/reloader/0.log" Jan 26 20:25:59 crc kubenswrapper[4787]: I0126 20:25:59.438450 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-sdpp9_0d161ffa-23ea-4543-a477-9481257193fc/frr-k8s-webhook-server/0.log" Jan 26 20:25:59 crc kubenswrapper[4787]: I0126 20:25:59.689322 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7cf95b7cc-gwpxg_60908790-8a50-4773-b481-e1fadc716242/manager/0.log" Jan 26 20:25:59 crc kubenswrapper[4787]: I0126 20:25:59.778656 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-57647b8b8d-x54bk_1673c754-97a5-4dab-b604-2fed469cddb3/webhook-server/0.log" Jan 26 20:25:59 crc kubenswrapper[4787]: I0126 20:25:59.927707 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f78r2_c5a525de-c458-4bd2-95cb-a514b2ade84f/kube-rbac-proxy/0.log" Jan 26 20:26:00 crc kubenswrapper[4787]: I0126 20:26:00.759237 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f78r2_c5a525de-c458-4bd2-95cb-a514b2ade84f/speaker/0.log" Jan 26 20:26:02 crc kubenswrapper[4787]: I0126 20:26:02.189379 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lqwvw_02143b0c-6bfa-4ba8-bc62-ba62eb8768cd/frr/0.log" Jan 26 20:26:10 crc kubenswrapper[4787]: I0126 20:26:10.589731 4787 scope.go:117] 
"RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:26:10 crc kubenswrapper[4787]: E0126 20:26:10.590681 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:26:17 crc kubenswrapper[4787]: I0126 20:26:17.170668 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z_b6c4cb41-668b-482d-a752-13aa73c5ab8f/util/0.log" Jan 26 20:26:17 crc kubenswrapper[4787]: I0126 20:26:17.361176 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z_b6c4cb41-668b-482d-a752-13aa73c5ab8f/util/0.log" Jan 26 20:26:17 crc kubenswrapper[4787]: I0126 20:26:17.388012 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z_b6c4cb41-668b-482d-a752-13aa73c5ab8f/pull/0.log" Jan 26 20:26:17 crc kubenswrapper[4787]: I0126 20:26:17.441626 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z_b6c4cb41-668b-482d-a752-13aa73c5ab8f/pull/0.log" Jan 26 20:26:17 crc kubenswrapper[4787]: I0126 20:26:17.557918 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z_b6c4cb41-668b-482d-a752-13aa73c5ab8f/util/0.log" Jan 26 20:26:17 crc kubenswrapper[4787]: I0126 20:26:17.577649 4787 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z_b6c4cb41-668b-482d-a752-13aa73c5ab8f/pull/0.log" Jan 26 20:26:17 crc kubenswrapper[4787]: I0126 20:26:17.674163 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alm46z_b6c4cb41-668b-482d-a752-13aa73c5ab8f/extract/0.log" Jan 26 20:26:17 crc kubenswrapper[4787]: I0126 20:26:17.779772 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg_fd6452e2-f24e-41fa-9979-70cdc9171695/util/0.log" Jan 26 20:26:18 crc kubenswrapper[4787]: I0126 20:26:18.532066 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg_fd6452e2-f24e-41fa-9979-70cdc9171695/util/0.log" Jan 26 20:26:18 crc kubenswrapper[4787]: I0126 20:26:18.612162 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg_fd6452e2-f24e-41fa-9979-70cdc9171695/pull/0.log" Jan 26 20:26:18 crc kubenswrapper[4787]: I0126 20:26:18.658868 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg_fd6452e2-f24e-41fa-9979-70cdc9171695/pull/0.log" Jan 26 20:26:18 crc kubenswrapper[4787]: I0126 20:26:18.793075 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg_fd6452e2-f24e-41fa-9979-70cdc9171695/pull/0.log" Jan 26 20:26:18 crc kubenswrapper[4787]: I0126 20:26:18.815559 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg_fd6452e2-f24e-41fa-9979-70cdc9171695/extract/0.log" Jan 26 20:26:18 crc kubenswrapper[4787]: I0126 20:26:18.833826 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcbxfpg_fd6452e2-f24e-41fa-9979-70cdc9171695/util/0.log" Jan 26 20:26:19 crc kubenswrapper[4787]: I0126 20:26:19.013268 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2_01dd99fb-59f5-4a7f-aa8e-73907ccf3077/util/0.log" Jan 26 20:26:19 crc kubenswrapper[4787]: I0126 20:26:19.140708 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2_01dd99fb-59f5-4a7f-aa8e-73907ccf3077/util/0.log" Jan 26 20:26:19 crc kubenswrapper[4787]: I0126 20:26:19.165519 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2_01dd99fb-59f5-4a7f-aa8e-73907ccf3077/pull/0.log" Jan 26 20:26:19 crc kubenswrapper[4787]: I0126 20:26:19.168888 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2_01dd99fb-59f5-4a7f-aa8e-73907ccf3077/pull/0.log" Jan 26 20:26:19 crc kubenswrapper[4787]: I0126 20:26:19.336568 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2_01dd99fb-59f5-4a7f-aa8e-73907ccf3077/util/0.log" Jan 26 20:26:19 crc kubenswrapper[4787]: I0126 20:26:19.343576 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2_01dd99fb-59f5-4a7f-aa8e-73907ccf3077/pull/0.log" Jan 26 
20:26:19 crc kubenswrapper[4787]: I0126 20:26:19.344387 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wcnc2_01dd99fb-59f5-4a7f-aa8e-73907ccf3077/extract/0.log" Jan 26 20:26:19 crc kubenswrapper[4787]: I0126 20:26:19.550065 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55_a3d2d7fa-6278-458e-8c4a-53cd612f13bb/util/0.log" Jan 26 20:26:19 crc kubenswrapper[4787]: I0126 20:26:19.675137 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55_a3d2d7fa-6278-458e-8c4a-53cd612f13bb/util/0.log" Jan 26 20:26:19 crc kubenswrapper[4787]: I0126 20:26:19.688007 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55_a3d2d7fa-6278-458e-8c4a-53cd612f13bb/pull/0.log" Jan 26 20:26:19 crc kubenswrapper[4787]: I0126 20:26:19.738321 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55_a3d2d7fa-6278-458e-8c4a-53cd612f13bb/pull/0.log" Jan 26 20:26:20 crc kubenswrapper[4787]: I0126 20:26:20.907129 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55_a3d2d7fa-6278-458e-8c4a-53cd612f13bb/pull/0.log" Jan 26 20:26:20 crc kubenswrapper[4787]: I0126 20:26:20.961014 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55_a3d2d7fa-6278-458e-8c4a-53cd612f13bb/extract/0.log" Jan 26 20:26:20 crc kubenswrapper[4787]: I0126 20:26:20.973784 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzh55_a3d2d7fa-6278-458e-8c4a-53cd612f13bb/util/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.016280 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n8jx_4a7b6b2d-d416-469d-9993-48859b6421ea/extract-utilities/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.187763 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n8jx_4a7b6b2d-d416-469d-9993-48859b6421ea/extract-content/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.195027 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n8jx_4a7b6b2d-d416-469d-9993-48859b6421ea/extract-utilities/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.201316 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n8jx_4a7b6b2d-d416-469d-9993-48859b6421ea/extract-content/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.373180 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n8jx_4a7b6b2d-d416-469d-9993-48859b6421ea/extract-content/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.445075 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n8jx_4a7b6b2d-d416-469d-9993-48859b6421ea/extract-utilities/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.446582 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b9czb_13f558d5-bd76-4704-9543-030ebc142baa/extract-utilities/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.673546 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-b9czb_13f558d5-bd76-4704-9543-030ebc142baa/extract-utilities/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.674554 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b9czb_13f558d5-bd76-4704-9543-030ebc142baa/extract-content/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.712112 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b9czb_13f558d5-bd76-4704-9543-030ebc142baa/extract-content/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.908930 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b9czb_13f558d5-bd76-4704-9543-030ebc142baa/extract-utilities/0.log" Jan 26 20:26:21 crc kubenswrapper[4787]: I0126 20:26:21.984033 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b9czb_13f558d5-bd76-4704-9543-030ebc142baa/extract-content/0.log" Jan 26 20:26:22 crc kubenswrapper[4787]: I0126 20:26:22.198265 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-kgtgb_0a585137-91b4-49f6-a28f-b91c1ceb5abc/marketplace-operator/0.log" Jan 26 20:26:22 crc kubenswrapper[4787]: I0126 20:26:22.364798 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5c48d_3bbf5ecf-4b0c-4a79-8024-9b50976c66ba/extract-utilities/0.log" Jan 26 20:26:22 crc kubenswrapper[4787]: I0126 20:26:22.471093 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5c48d_3bbf5ecf-4b0c-4a79-8024-9b50976c66ba/extract-utilities/0.log" Jan 26 20:26:22 crc kubenswrapper[4787]: I0126 20:26:22.489834 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-b9czb_13f558d5-bd76-4704-9543-030ebc142baa/registry-server/0.log" Jan 26 20:26:22 crc kubenswrapper[4787]: I0126 20:26:22.490840 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5c48d_3bbf5ecf-4b0c-4a79-8024-9b50976c66ba/extract-content/0.log" Jan 26 20:26:22 crc kubenswrapper[4787]: I0126 20:26:22.589012 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:26:22 crc kubenswrapper[4787]: E0126 20:26:22.589425 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:26:22 crc kubenswrapper[4787]: I0126 20:26:22.630554 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n8jx_4a7b6b2d-d416-469d-9993-48859b6421ea/registry-server/0.log" Jan 26 20:26:22 crc kubenswrapper[4787]: I0126 20:26:22.638131 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5c48d_3bbf5ecf-4b0c-4a79-8024-9b50976c66ba/extract-content/0.log" Jan 26 20:26:22 crc kubenswrapper[4787]: I0126 20:26:22.765986 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5c48d_3bbf5ecf-4b0c-4a79-8024-9b50976c66ba/extract-content/0.log" Jan 26 20:26:22 crc kubenswrapper[4787]: I0126 20:26:22.789796 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5c48d_3bbf5ecf-4b0c-4a79-8024-9b50976c66ba/extract-utilities/0.log" Jan 26 20:26:22 crc 
kubenswrapper[4787]: I0126 20:26:22.841479 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jlbjr_17348c52-3049-45fc-8d0f-1be5551be571/extract-utilities/0.log" Jan 26 20:26:23 crc kubenswrapper[4787]: I0126 20:26:23.043703 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5c48d_3bbf5ecf-4b0c-4a79-8024-9b50976c66ba/registry-server/0.log" Jan 26 20:26:23 crc kubenswrapper[4787]: I0126 20:26:23.051714 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jlbjr_17348c52-3049-45fc-8d0f-1be5551be571/extract-content/0.log" Jan 26 20:26:23 crc kubenswrapper[4787]: I0126 20:26:23.086220 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jlbjr_17348c52-3049-45fc-8d0f-1be5551be571/extract-utilities/0.log" Jan 26 20:26:23 crc kubenswrapper[4787]: I0126 20:26:23.139696 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jlbjr_17348c52-3049-45fc-8d0f-1be5551be571/extract-content/0.log" Jan 26 20:26:23 crc kubenswrapper[4787]: I0126 20:26:23.311333 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jlbjr_17348c52-3049-45fc-8d0f-1be5551be571/extract-content/0.log" Jan 26 20:26:23 crc kubenswrapper[4787]: I0126 20:26:23.351329 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jlbjr_17348c52-3049-45fc-8d0f-1be5551be571/extract-utilities/0.log" Jan 26 20:26:24 crc kubenswrapper[4787]: I0126 20:26:24.438831 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jlbjr_17348c52-3049-45fc-8d0f-1be5551be571/registry-server/0.log" Jan 26 20:26:34 crc kubenswrapper[4787]: I0126 20:26:34.589194 4787 scope.go:117] "RemoveContainer" 
containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:26:34 crc kubenswrapper[4787]: E0126 20:26:34.590285 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:26:39 crc kubenswrapper[4787]: I0126 20:26:39.446394 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-tlzxd_5464b021-25b7-421a-84fb-464912bd7891/prometheus-operator/0.log" Jan 26 20:26:39 crc kubenswrapper[4787]: I0126 20:26:39.461648 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64f44d6fd9-96mmz_97e77b51-0240-4d93-8966-5b1e733ccf08/prometheus-operator-admission-webhook/0.log" Jan 26 20:26:39 crc kubenswrapper[4787]: I0126 20:26:39.473037 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64f44d6fd9-ms9gf_9c9a4c7d-e1c8-4b61-b756-c32495dfc027/prometheus-operator-admission-webhook/0.log" Jan 26 20:26:39 crc kubenswrapper[4787]: I0126 20:26:39.616800 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-pb6sd_3dcba907-3e69-4b6b-bdbf-89b17b09a8f1/operator/0.log" Jan 26 20:26:39 crc kubenswrapper[4787]: I0126 20:26:39.622348 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-v7v9m_7f8c4487-cd5a-456d-8a69-0b1296b4a687/perses-operator/0.log" Jan 26 20:26:46 crc kubenswrapper[4787]: I0126 20:26:46.588574 4787 scope.go:117] "RemoveContainer" 
containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:26:46 crc kubenswrapper[4787]: E0126 20:26:46.589298 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:26:49 crc kubenswrapper[4787]: E0126 20:26:49.524996 4787 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:54762->38.102.83.69:41761: write tcp 38.102.83.69:54762->38.102.83.69:41761: write: broken pipe Jan 26 20:26:59 crc kubenswrapper[4787]: E0126 20:26:59.192287 4787 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:35758->38.102.83.69:41761: write tcp 38.102.83.69:35758->38.102.83.69:41761: write: broken pipe Jan 26 20:26:59 crc kubenswrapper[4787]: I0126 20:26:59.590111 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:26:59 crc kubenswrapper[4787]: E0126 20:26:59.590518 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:27:05 crc kubenswrapper[4787]: E0126 20:27:05.507684 4787 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:45966->38.102.83.69:41761: write tcp 38.102.83.69:45966->38.102.83.69:41761: write: 
connection reset by peer Jan 26 20:27:09 crc kubenswrapper[4787]: E0126 20:27:09.201762 4787 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:46084->38.102.83.69:41761: write tcp 38.102.83.69:46084->38.102.83.69:41761: write: broken pipe Jan 26 20:27:12 crc kubenswrapper[4787]: I0126 20:27:12.589151 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:27:12 crc kubenswrapper[4787]: E0126 20:27:12.590397 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:27:23 crc kubenswrapper[4787]: I0126 20:27:23.590237 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:27:23 crc kubenswrapper[4787]: E0126 20:27:23.591062 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.701000 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w28qs"] Jan 26 20:27:33 crc kubenswrapper[4787]: E0126 20:27:33.702086 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" containerName="extract-content" Jan 
26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.702101 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" containerName="extract-content" Jan 26 20:27:33 crc kubenswrapper[4787]: E0126 20:27:33.702119 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" containerName="registry-server" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.702126 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" containerName="registry-server" Jan 26 20:27:33 crc kubenswrapper[4787]: E0126 20:27:33.702155 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" containerName="extract-utilities" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.702163 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" containerName="extract-utilities" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.702425 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa80c57-f0a4-4fe3-8beb-cbb90126c1b5" containerName="registry-server" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.707356 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.745473 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w28qs"] Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.775626 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mvbl\" (UniqueName: \"kubernetes.io/projected/3150bdb2-3915-4496-9b93-50027adbc3a9-kube-api-access-4mvbl\") pod \"redhat-operators-w28qs\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.775906 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-catalog-content\") pod \"redhat-operators-w28qs\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.776127 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-utilities\") pod \"redhat-operators-w28qs\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.877494 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-utilities\") pod \"redhat-operators-w28qs\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.877836 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4mvbl\" (UniqueName: \"kubernetes.io/projected/3150bdb2-3915-4496-9b93-50027adbc3a9-kube-api-access-4mvbl\") pod \"redhat-operators-w28qs\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.877973 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-catalog-content\") pod \"redhat-operators-w28qs\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.878080 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-utilities\") pod \"redhat-operators-w28qs\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.878489 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-catalog-content\") pod \"redhat-operators-w28qs\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:33 crc kubenswrapper[4787]: I0126 20:27:33.898457 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mvbl\" (UniqueName: \"kubernetes.io/projected/3150bdb2-3915-4496-9b93-50027adbc3a9-kube-api-access-4mvbl\") pod \"redhat-operators-w28qs\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:34 crc kubenswrapper[4787]: I0126 20:27:34.029865 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:34 crc kubenswrapper[4787]: I0126 20:27:34.521339 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w28qs"] Jan 26 20:27:34 crc kubenswrapper[4787]: I0126 20:27:34.637589 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w28qs" event={"ID":"3150bdb2-3915-4496-9b93-50027adbc3a9","Type":"ContainerStarted","Data":"fb92a854d3b32569f5895444366d24d31dc62d88a429ffc396e3236460a827a2"} Jan 26 20:27:35 crc kubenswrapper[4787]: I0126 20:27:35.664162 4787 generic.go:334] "Generic (PLEG): container finished" podID="3150bdb2-3915-4496-9b93-50027adbc3a9" containerID="92d2596c0d835a61a94fe469ab3e51d7a13b748ddf6c53969fd878e38b70955b" exitCode=0 Jan 26 20:27:35 crc kubenswrapper[4787]: I0126 20:27:35.664445 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w28qs" event={"ID":"3150bdb2-3915-4496-9b93-50027adbc3a9","Type":"ContainerDied","Data":"92d2596c0d835a61a94fe469ab3e51d7a13b748ddf6c53969fd878e38b70955b"} Jan 26 20:27:36 crc kubenswrapper[4787]: I0126 20:27:36.678318 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w28qs" event={"ID":"3150bdb2-3915-4496-9b93-50027adbc3a9","Type":"ContainerStarted","Data":"b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e"} Jan 26 20:27:37 crc kubenswrapper[4787]: I0126 20:27:37.593906 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:27:37 crc kubenswrapper[4787]: E0126 20:27:37.594567 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:27:40 crc kubenswrapper[4787]: I0126 20:27:40.736144 4787 generic.go:334] "Generic (PLEG): container finished" podID="3150bdb2-3915-4496-9b93-50027adbc3a9" containerID="b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e" exitCode=0 Jan 26 20:27:40 crc kubenswrapper[4787]: I0126 20:27:40.736829 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w28qs" event={"ID":"3150bdb2-3915-4496-9b93-50027adbc3a9","Type":"ContainerDied","Data":"b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e"} Jan 26 20:27:42 crc kubenswrapper[4787]: I0126 20:27:42.761135 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w28qs" event={"ID":"3150bdb2-3915-4496-9b93-50027adbc3a9","Type":"ContainerStarted","Data":"a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4"} Jan 26 20:27:42 crc kubenswrapper[4787]: I0126 20:27:42.823280 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w28qs" podStartSLOduration=3.90724921 podStartE2EDuration="9.823253588s" podCreationTimestamp="2026-01-26 20:27:33 +0000 UTC" firstStartedPulling="2026-01-26 20:27:35.671241269 +0000 UTC m=+9824.378377422" lastFinishedPulling="2026-01-26 20:27:41.587245637 +0000 UTC m=+9830.294381800" observedRunningTime="2026-01-26 20:27:42.815705363 +0000 UTC m=+9831.522841496" watchObservedRunningTime="2026-01-26 20:27:42.823253588 +0000 UTC m=+9831.530389731" Jan 26 20:27:44 crc kubenswrapper[4787]: I0126 20:27:44.030107 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:44 crc kubenswrapper[4787]: I0126 20:27:44.030503 4787 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:45 crc kubenswrapper[4787]: I0126 20:27:45.073686 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w28qs" podUID="3150bdb2-3915-4496-9b93-50027adbc3a9" containerName="registry-server" probeResult="failure" output=< Jan 26 20:27:45 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 26 20:27:45 crc kubenswrapper[4787]: > Jan 26 20:27:49 crc kubenswrapper[4787]: I0126 20:27:49.589527 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:27:49 crc kubenswrapper[4787]: E0126 20:27:49.589985 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:27:54 crc kubenswrapper[4787]: I0126 20:27:54.079021 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:54 crc kubenswrapper[4787]: I0126 20:27:54.126867 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:54 crc kubenswrapper[4787]: I0126 20:27:54.954074 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w28qs"] Jan 26 20:27:55 crc kubenswrapper[4787]: I0126 20:27:55.899834 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w28qs" podUID="3150bdb2-3915-4496-9b93-50027adbc3a9" 
containerName="registry-server" containerID="cri-o://a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4" gracePeriod=2 Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.433408 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.578259 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-catalog-content\") pod \"3150bdb2-3915-4496-9b93-50027adbc3a9\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.578382 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mvbl\" (UniqueName: \"kubernetes.io/projected/3150bdb2-3915-4496-9b93-50027adbc3a9-kube-api-access-4mvbl\") pod \"3150bdb2-3915-4496-9b93-50027adbc3a9\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.578477 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-utilities\") pod \"3150bdb2-3915-4496-9b93-50027adbc3a9\" (UID: \"3150bdb2-3915-4496-9b93-50027adbc3a9\") " Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.579634 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-utilities" (OuterVolumeSpecName: "utilities") pod "3150bdb2-3915-4496-9b93-50027adbc3a9" (UID: "3150bdb2-3915-4496-9b93-50027adbc3a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.585472 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3150bdb2-3915-4496-9b93-50027adbc3a9-kube-api-access-4mvbl" (OuterVolumeSpecName: "kube-api-access-4mvbl") pod "3150bdb2-3915-4496-9b93-50027adbc3a9" (UID: "3150bdb2-3915-4496-9b93-50027adbc3a9"). InnerVolumeSpecName "kube-api-access-4mvbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.682859 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.682918 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mvbl\" (UniqueName: \"kubernetes.io/projected/3150bdb2-3915-4496-9b93-50027adbc3a9-kube-api-access-4mvbl\") on node \"crc\" DevicePath \"\"" Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.733903 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3150bdb2-3915-4496-9b93-50027adbc3a9" (UID: "3150bdb2-3915-4496-9b93-50027adbc3a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.785974 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3150bdb2-3915-4496-9b93-50027adbc3a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.914799 4787 generic.go:334] "Generic (PLEG): container finished" podID="3150bdb2-3915-4496-9b93-50027adbc3a9" containerID="a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4" exitCode=0 Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.914863 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w28qs" event={"ID":"3150bdb2-3915-4496-9b93-50027adbc3a9","Type":"ContainerDied","Data":"a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4"} Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.914985 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w28qs" event={"ID":"3150bdb2-3915-4496-9b93-50027adbc3a9","Type":"ContainerDied","Data":"fb92a854d3b32569f5895444366d24d31dc62d88a429ffc396e3236460a827a2"} Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.915008 4787 scope.go:117] "RemoveContainer" containerID="a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4" Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.914907 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w28qs" Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.963840 4787 scope.go:117] "RemoveContainer" containerID="b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e" Jan 26 20:27:56 crc kubenswrapper[4787]: I0126 20:27:56.985380 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w28qs"] Jan 26 20:27:57 crc kubenswrapper[4787]: I0126 20:27:57.002433 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w28qs"] Jan 26 20:27:57 crc kubenswrapper[4787]: I0126 20:27:57.003634 4787 scope.go:117] "RemoveContainer" containerID="92d2596c0d835a61a94fe469ab3e51d7a13b748ddf6c53969fd878e38b70955b" Jan 26 20:27:57 crc kubenswrapper[4787]: I0126 20:27:57.060433 4787 scope.go:117] "RemoveContainer" containerID="a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4" Jan 26 20:27:57 crc kubenswrapper[4787]: E0126 20:27:57.060849 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4\": container with ID starting with a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4 not found: ID does not exist" containerID="a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4" Jan 26 20:27:57 crc kubenswrapper[4787]: I0126 20:27:57.060887 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4"} err="failed to get container status \"a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4\": rpc error: code = NotFound desc = could not find container \"a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4\": container with ID starting with a74a67d6ff7b9f0de9805ed518a781286e565d00b7a1afe5cac40e50b08ffea4 not found: ID does 
not exist" Jan 26 20:27:57 crc kubenswrapper[4787]: I0126 20:27:57.060914 4787 scope.go:117] "RemoveContainer" containerID="b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e" Jan 26 20:27:57 crc kubenswrapper[4787]: E0126 20:27:57.061295 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e\": container with ID starting with b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e not found: ID does not exist" containerID="b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e" Jan 26 20:27:57 crc kubenswrapper[4787]: I0126 20:27:57.061323 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e"} err="failed to get container status \"b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e\": rpc error: code = NotFound desc = could not find container \"b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e\": container with ID starting with b6c959204d085ab6576bac6695d06be0a647344496b000327d60b634f8c8481e not found: ID does not exist" Jan 26 20:27:57 crc kubenswrapper[4787]: I0126 20:27:57.061343 4787 scope.go:117] "RemoveContainer" containerID="92d2596c0d835a61a94fe469ab3e51d7a13b748ddf6c53969fd878e38b70955b" Jan 26 20:27:57 crc kubenswrapper[4787]: E0126 20:27:57.061719 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92d2596c0d835a61a94fe469ab3e51d7a13b748ddf6c53969fd878e38b70955b\": container with ID starting with 92d2596c0d835a61a94fe469ab3e51d7a13b748ddf6c53969fd878e38b70955b not found: ID does not exist" containerID="92d2596c0d835a61a94fe469ab3e51d7a13b748ddf6c53969fd878e38b70955b" Jan 26 20:27:57 crc kubenswrapper[4787]: I0126 20:27:57.061751 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92d2596c0d835a61a94fe469ab3e51d7a13b748ddf6c53969fd878e38b70955b"} err="failed to get container status \"92d2596c0d835a61a94fe469ab3e51d7a13b748ddf6c53969fd878e38b70955b\": rpc error: code = NotFound desc = could not find container \"92d2596c0d835a61a94fe469ab3e51d7a13b748ddf6c53969fd878e38b70955b\": container with ID starting with 92d2596c0d835a61a94fe469ab3e51d7a13b748ddf6c53969fd878e38b70955b not found: ID does not exist" Jan 26 20:27:57 crc kubenswrapper[4787]: I0126 20:27:57.616145 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3150bdb2-3915-4496-9b93-50027adbc3a9" path="/var/lib/kubelet/pods/3150bdb2-3915-4496-9b93-50027adbc3a9/volumes" Jan 26 20:28:02 crc kubenswrapper[4787]: I0126 20:28:02.592442 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:28:02 crc kubenswrapper[4787]: E0126 20:28:02.593592 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:28:14 crc kubenswrapper[4787]: I0126 20:28:14.622485 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:28:14 crc kubenswrapper[4787]: E0126 20:28:14.623332 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:28:25 crc kubenswrapper[4787]: I0126 20:28:25.591706 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:28:25 crc kubenswrapper[4787]: E0126 20:28:25.592669 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:28:39 crc kubenswrapper[4787]: I0126 20:28:39.592012 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:28:39 crc kubenswrapper[4787]: E0126 20:28:39.592736 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6x4t8_openshift-machine-config-operator(418f020a-c193-4323-a29a-59c3ad0f1d35)\"" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" podUID="418f020a-c193-4323-a29a-59c3ad0f1d35" Jan 26 20:28:45 crc kubenswrapper[4787]: I0126 20:28:45.508916 4787 generic.go:334] "Generic (PLEG): container finished" podID="5531bf5d-fded-4e95-836c-42bc930460b7" containerID="57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44" exitCode=0 Jan 26 20:28:45 crc kubenswrapper[4787]: I0126 20:28:45.509032 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pj5x9/must-gather-h48jx" 
event={"ID":"5531bf5d-fded-4e95-836c-42bc930460b7","Type":"ContainerDied","Data":"57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44"} Jan 26 20:28:45 crc kubenswrapper[4787]: I0126 20:28:45.510715 4787 scope.go:117] "RemoveContainer" containerID="57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44" Jan 26 20:28:46 crc kubenswrapper[4787]: I0126 20:28:46.443225 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pj5x9_must-gather-h48jx_5531bf5d-fded-4e95-836c-42bc930460b7/gather/0.log" Jan 26 20:28:50 crc kubenswrapper[4787]: I0126 20:28:50.591264 4787 scope.go:117] "RemoveContainer" containerID="dc4689d2f8d5fc2d5b9cb6f45ac7efb5e171f6155e976d8309ab9c3c5609a6bd" Jan 26 20:28:51 crc kubenswrapper[4787]: I0126 20:28:51.575784 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6x4t8" event={"ID":"418f020a-c193-4323-a29a-59c3ad0f1d35","Type":"ContainerStarted","Data":"eb2b29347340c5b375f32c48e8df025a8ffbd8f96f7e3373ef57c1f9449d9ed6"} Jan 26 20:28:55 crc kubenswrapper[4787]: I0126 20:28:55.566019 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pj5x9/must-gather-h48jx"] Jan 26 20:28:55 crc kubenswrapper[4787]: I0126 20:28:55.567029 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pj5x9/must-gather-h48jx" podUID="5531bf5d-fded-4e95-836c-42bc930460b7" containerName="copy" containerID="cri-o://493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403" gracePeriod=2 Jan 26 20:28:55 crc kubenswrapper[4787]: I0126 20:28:55.576935 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pj5x9/must-gather-h48jx"] Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.076182 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pj5x9_must-gather-h48jx_5531bf5d-fded-4e95-836c-42bc930460b7/copy/0.log" Jan 26 
20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.077771 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pj5x9/must-gather-h48jx" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.206760 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5531bf5d-fded-4e95-836c-42bc930460b7-must-gather-output\") pod \"5531bf5d-fded-4e95-836c-42bc930460b7\" (UID: \"5531bf5d-fded-4e95-836c-42bc930460b7\") " Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.207075 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw78j\" (UniqueName: \"kubernetes.io/projected/5531bf5d-fded-4e95-836c-42bc930460b7-kube-api-access-rw78j\") pod \"5531bf5d-fded-4e95-836c-42bc930460b7\" (UID: \"5531bf5d-fded-4e95-836c-42bc930460b7\") " Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.214279 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5531bf5d-fded-4e95-836c-42bc930460b7-kube-api-access-rw78j" (OuterVolumeSpecName: "kube-api-access-rw78j") pod "5531bf5d-fded-4e95-836c-42bc930460b7" (UID: "5531bf5d-fded-4e95-836c-42bc930460b7"). InnerVolumeSpecName "kube-api-access-rw78j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.318405 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw78j\" (UniqueName: \"kubernetes.io/projected/5531bf5d-fded-4e95-836c-42bc930460b7-kube-api-access-rw78j\") on node \"crc\" DevicePath \"\"" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.436302 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5531bf5d-fded-4e95-836c-42bc930460b7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5531bf5d-fded-4e95-836c-42bc930460b7" (UID: "5531bf5d-fded-4e95-836c-42bc930460b7"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.521662 4787 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5531bf5d-fded-4e95-836c-42bc930460b7-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.641925 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pj5x9_must-gather-h48jx_5531bf5d-fded-4e95-836c-42bc930460b7/copy/0.log" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.642477 4787 generic.go:334] "Generic (PLEG): container finished" podID="5531bf5d-fded-4e95-836c-42bc930460b7" containerID="493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403" exitCode=143 Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.642526 4787 scope.go:117] "RemoveContainer" containerID="493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.642676 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pj5x9/must-gather-h48jx" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.673098 4787 scope.go:117] "RemoveContainer" containerID="57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.721670 4787 scope.go:117] "RemoveContainer" containerID="493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403" Jan 26 20:28:56 crc kubenswrapper[4787]: E0126 20:28:56.722462 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403\": container with ID starting with 493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403 not found: ID does not exist" containerID="493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.722506 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403"} err="failed to get container status \"493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403\": rpc error: code = NotFound desc = could not find container \"493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403\": container with ID starting with 493e1e269d56e19c8db8fa30eabedf07dc67614d040d3fe21cf78ac758ea6403 not found: ID does not exist" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.722533 4787 scope.go:117] "RemoveContainer" containerID="57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44" Jan 26 20:28:56 crc kubenswrapper[4787]: E0126 20:28:56.722919 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44\": container with ID starting with 
57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44 not found: ID does not exist" containerID="57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44" Jan 26 20:28:56 crc kubenswrapper[4787]: I0126 20:28:56.722969 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44"} err="failed to get container status \"57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44\": rpc error: code = NotFound desc = could not find container \"57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44\": container with ID starting with 57519fbb0a6fa29ebf711c0c28219717b77032a833c8be8af9da205dfd4b1b44 not found: ID does not exist" Jan 26 20:28:57 crc kubenswrapper[4787]: I0126 20:28:57.603571 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5531bf5d-fded-4e95-836c-42bc930460b7" path="/var/lib/kubelet/pods/5531bf5d-fded-4e95-836c-42bc930460b7/volumes" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.187261 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"] Jan 26 20:30:00 crc kubenswrapper[4787]: E0126 20:30:00.188354 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3150bdb2-3915-4496-9b93-50027adbc3a9" containerName="extract-content" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.188375 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3150bdb2-3915-4496-9b93-50027adbc3a9" containerName="extract-content" Jan 26 20:30:00 crc kubenswrapper[4787]: E0126 20:30:00.188389 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5531bf5d-fded-4e95-836c-42bc930460b7" containerName="copy" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.188396 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5531bf5d-fded-4e95-836c-42bc930460b7" containerName="copy" Jan 26 20:30:00 crc 
kubenswrapper[4787]: E0126 20:30:00.188411 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3150bdb2-3915-4496-9b93-50027adbc3a9" containerName="registry-server" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.188419 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3150bdb2-3915-4496-9b93-50027adbc3a9" containerName="registry-server" Jan 26 20:30:00 crc kubenswrapper[4787]: E0126 20:30:00.188443 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5531bf5d-fded-4e95-836c-42bc930460b7" containerName="gather" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.188451 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5531bf5d-fded-4e95-836c-42bc930460b7" containerName="gather" Jan 26 20:30:00 crc kubenswrapper[4787]: E0126 20:30:00.188462 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3150bdb2-3915-4496-9b93-50027adbc3a9" containerName="extract-utilities" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.188470 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3150bdb2-3915-4496-9b93-50027adbc3a9" containerName="extract-utilities" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.188759 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5531bf5d-fded-4e95-836c-42bc930460b7" containerName="copy" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.188781 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3150bdb2-3915-4496-9b93-50027adbc3a9" containerName="registry-server" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.188807 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5531bf5d-fded-4e95-836c-42bc930460b7" containerName="gather" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.189684 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.193007 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.193005 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.222487 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"] Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.246197 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59b9b1fc-7ecd-4622-86c6-5610611764c1-secret-volume\") pod \"collect-profiles-29490990-p7jcg\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.246361 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n789\" (UniqueName: \"kubernetes.io/projected/59b9b1fc-7ecd-4622-86c6-5610611764c1-kube-api-access-5n789\") pod \"collect-profiles-29490990-p7jcg\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg" Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.246469 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59b9b1fc-7ecd-4622-86c6-5610611764c1-config-volume\") pod \"collect-profiles-29490990-p7jcg\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"
Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.348377 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59b9b1fc-7ecd-4622-86c6-5610611764c1-secret-volume\") pod \"collect-profiles-29490990-p7jcg\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"
Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.348507 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n789\" (UniqueName: \"kubernetes.io/projected/59b9b1fc-7ecd-4622-86c6-5610611764c1-kube-api-access-5n789\") pod \"collect-profiles-29490990-p7jcg\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"
Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.348654 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59b9b1fc-7ecd-4622-86c6-5610611764c1-config-volume\") pod \"collect-profiles-29490990-p7jcg\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"
Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.350025 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59b9b1fc-7ecd-4622-86c6-5610611764c1-config-volume\") pod \"collect-profiles-29490990-p7jcg\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"
Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.355364 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59b9b1fc-7ecd-4622-86c6-5610611764c1-secret-volume\") pod \"collect-profiles-29490990-p7jcg\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"
Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.375297 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n789\" (UniqueName: \"kubernetes.io/projected/59b9b1fc-7ecd-4622-86c6-5610611764c1-kube-api-access-5n789\") pod \"collect-profiles-29490990-p7jcg\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"
Jan 26 20:30:00 crc kubenswrapper[4787]: I0126 20:30:00.524504 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"
Jan 26 20:30:01 crc kubenswrapper[4787]: I0126 20:30:01.067995 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"]
Jan 26 20:30:01 crc kubenswrapper[4787]: I0126 20:30:01.457077 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg" event={"ID":"59b9b1fc-7ecd-4622-86c6-5610611764c1","Type":"ContainerStarted","Data":"d9da5c76a12533fb2e8d43f20bcd53f383a4818b450551a1b7f1f409b83f3b95"}
Jan 26 20:30:01 crc kubenswrapper[4787]: I0126 20:30:01.457368 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg" event={"ID":"59b9b1fc-7ecd-4622-86c6-5610611764c1","Type":"ContainerStarted","Data":"096de4df78b6f7d8e30b6585b95be908336e5c1d4f43d1d53696003dbe928faf"}
Jan 26 20:30:01 crc kubenswrapper[4787]: I0126 20:30:01.481554 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg" podStartSLOduration=1.481533325 podStartE2EDuration="1.481533325s" podCreationTimestamp="2026-01-26 20:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:30:01.472130294 +0000 UTC m=+9970.179266427" watchObservedRunningTime="2026-01-26 20:30:01.481533325 +0000 UTC m=+9970.188669458"
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.268083 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mw84t"]
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.271885 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.282578 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mw84t"]
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.391247 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-utilities\") pod \"certified-operators-mw84t\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") " pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.391437 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-catalog-content\") pod \"certified-operators-mw84t\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") " pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.391575 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cns2r\" (UniqueName: \"kubernetes.io/projected/32130bbf-2daa-41e2-a67e-55632e88211f-kube-api-access-cns2r\") pod \"certified-operators-mw84t\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") " pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.468874 4787 generic.go:334] "Generic (PLEG): container finished" podID="59b9b1fc-7ecd-4622-86c6-5610611764c1" containerID="d9da5c76a12533fb2e8d43f20bcd53f383a4818b450551a1b7f1f409b83f3b95" exitCode=0
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.468954 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg" event={"ID":"59b9b1fc-7ecd-4622-86c6-5610611764c1","Type":"ContainerDied","Data":"d9da5c76a12533fb2e8d43f20bcd53f383a4818b450551a1b7f1f409b83f3b95"}
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.494337 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-catalog-content\") pod \"certified-operators-mw84t\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") " pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.494552 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cns2r\" (UniqueName: \"kubernetes.io/projected/32130bbf-2daa-41e2-a67e-55632e88211f-kube-api-access-cns2r\") pod \"certified-operators-mw84t\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") " pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.494646 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-utilities\") pod \"certified-operators-mw84t\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") " pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.494849 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-catalog-content\") pod \"certified-operators-mw84t\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") " pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.495293 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-utilities\") pod \"certified-operators-mw84t\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") " pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.515210 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cns2r\" (UniqueName: \"kubernetes.io/projected/32130bbf-2daa-41e2-a67e-55632e88211f-kube-api-access-cns2r\") pod \"certified-operators-mw84t\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") " pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:02 crc kubenswrapper[4787]: I0126 20:30:02.652763 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:03 crc kubenswrapper[4787]: I0126 20:30:03.630150 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mw84t"]
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.038358 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.130784 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59b9b1fc-7ecd-4622-86c6-5610611764c1-config-volume\") pod \"59b9b1fc-7ecd-4622-86c6-5610611764c1\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") "
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.130903 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n789\" (UniqueName: \"kubernetes.io/projected/59b9b1fc-7ecd-4622-86c6-5610611764c1-kube-api-access-5n789\") pod \"59b9b1fc-7ecd-4622-86c6-5610611764c1\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") "
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.131048 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59b9b1fc-7ecd-4622-86c6-5610611764c1-secret-volume\") pod \"59b9b1fc-7ecd-4622-86c6-5610611764c1\" (UID: \"59b9b1fc-7ecd-4622-86c6-5610611764c1\") "
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.131990 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b9b1fc-7ecd-4622-86c6-5610611764c1-config-volume" (OuterVolumeSpecName: "config-volume") pod "59b9b1fc-7ecd-4622-86c6-5610611764c1" (UID: "59b9b1fc-7ecd-4622-86c6-5610611764c1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.138894 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59b9b1fc-7ecd-4622-86c6-5610611764c1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "59b9b1fc-7ecd-4622-86c6-5610611764c1" (UID: "59b9b1fc-7ecd-4622-86c6-5610611764c1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.142253 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b9b1fc-7ecd-4622-86c6-5610611764c1-kube-api-access-5n789" (OuterVolumeSpecName: "kube-api-access-5n789") pod "59b9b1fc-7ecd-4622-86c6-5610611764c1" (UID: "59b9b1fc-7ecd-4622-86c6-5610611764c1"). InnerVolumeSpecName "kube-api-access-5n789". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.232877 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59b9b1fc-7ecd-4622-86c6-5610611764c1-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.232912 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n789\" (UniqueName: \"kubernetes.io/projected/59b9b1fc-7ecd-4622-86c6-5610611764c1-kube-api-access-5n789\") on node \"crc\" DevicePath \"\""
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.232923 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59b9b1fc-7ecd-4622-86c6-5610611764c1-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.487477 4787 generic.go:334] "Generic (PLEG): container finished" podID="32130bbf-2daa-41e2-a67e-55632e88211f" containerID="bd7847ca1e4e53f8f9c40ca4ab9ab027f2720f8c80c21137d4b92b2413c6c0e0" exitCode=0
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.487529 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw84t" event={"ID":"32130bbf-2daa-41e2-a67e-55632e88211f","Type":"ContainerDied","Data":"bd7847ca1e4e53f8f9c40ca4ab9ab027f2720f8c80c21137d4b92b2413c6c0e0"}
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.487830 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw84t" event={"ID":"32130bbf-2daa-41e2-a67e-55632e88211f","Type":"ContainerStarted","Data":"ca3538f7831d93746e3635097c999793ba3f327c379e474a0b509973248534a0"}
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.490819 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg" event={"ID":"59b9b1fc-7ecd-4622-86c6-5610611764c1","Type":"ContainerDied","Data":"096de4df78b6f7d8e30b6585b95be908336e5c1d4f43d1d53696003dbe928faf"}
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.490852 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="096de4df78b6f7d8e30b6585b95be908336e5c1d4f43d1d53696003dbe928faf"
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.490906 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-p7jcg"
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.584308 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv"]
Jan 26 20:30:04 crc kubenswrapper[4787]: I0126 20:30:04.594114 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-zkthv"]
Jan 26 20:30:05 crc kubenswrapper[4787]: I0126 20:30:05.605409 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef9034e6-d02c-4c85-8c8a-24013cdbdb1b" path="/var/lib/kubelet/pods/ef9034e6-d02c-4c85-8c8a-24013cdbdb1b/volumes"
Jan 26 20:30:05 crc kubenswrapper[4787]: I0126 20:30:05.846239 4787 scope.go:117] "RemoveContainer" containerID="4061af3e87cdad72fc86fac700950b64639f9cfa939a380c18c4efc974aef4b9"
Jan 26 20:30:06 crc kubenswrapper[4787]: I0126 20:30:06.514007 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw84t" event={"ID":"32130bbf-2daa-41e2-a67e-55632e88211f","Type":"ContainerStarted","Data":"70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0"}
Jan 26 20:30:07 crc kubenswrapper[4787]: I0126 20:30:07.527083 4787 generic.go:334] "Generic (PLEG): container finished" podID="32130bbf-2daa-41e2-a67e-55632e88211f" containerID="70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0" exitCode=0
Jan 26 20:30:07 crc kubenswrapper[4787]: I0126 20:30:07.527323 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw84t" event={"ID":"32130bbf-2daa-41e2-a67e-55632e88211f","Type":"ContainerDied","Data":"70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0"}
Jan 26 20:30:08 crc kubenswrapper[4787]: I0126 20:30:08.539115 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw84t" event={"ID":"32130bbf-2daa-41e2-a67e-55632e88211f","Type":"ContainerStarted","Data":"a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc"}
Jan 26 20:30:08 crc kubenswrapper[4787]: I0126 20:30:08.568367 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mw84t" podStartSLOduration=3.083428684 podStartE2EDuration="6.568345624s" podCreationTimestamp="2026-01-26 20:30:02 +0000 UTC" firstStartedPulling="2026-01-26 20:30:04.490447993 +0000 UTC m=+9973.197584126" lastFinishedPulling="2026-01-26 20:30:07.975364933 +0000 UTC m=+9976.682501066" observedRunningTime="2026-01-26 20:30:08.558790839 +0000 UTC m=+9977.265926992" watchObservedRunningTime="2026-01-26 20:30:08.568345624 +0000 UTC m=+9977.275481747"
Jan 26 20:30:12 crc kubenswrapper[4787]: I0126 20:30:12.652933 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:12 crc kubenswrapper[4787]: I0126 20:30:12.654754 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:12 crc kubenswrapper[4787]: I0126 20:30:12.811289 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:13 crc kubenswrapper[4787]: I0126 20:30:13.672906 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:13 crc kubenswrapper[4787]: I0126 20:30:13.757140 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mw84t"]
Jan 26 20:30:15 crc kubenswrapper[4787]: I0126 20:30:15.615974 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mw84t" podUID="32130bbf-2daa-41e2-a67e-55632e88211f" containerName="registry-server" containerID="cri-o://a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc" gracePeriod=2
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.205622 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.315919 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-utilities\") pod \"32130bbf-2daa-41e2-a67e-55632e88211f\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") "
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.316395 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-catalog-content\") pod \"32130bbf-2daa-41e2-a67e-55632e88211f\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") "
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.316547 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cns2r\" (UniqueName: \"kubernetes.io/projected/32130bbf-2daa-41e2-a67e-55632e88211f-kube-api-access-cns2r\") pod \"32130bbf-2daa-41e2-a67e-55632e88211f\" (UID: \"32130bbf-2daa-41e2-a67e-55632e88211f\") "
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.317198 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-utilities" (OuterVolumeSpecName: "utilities") pod "32130bbf-2daa-41e2-a67e-55632e88211f" (UID: "32130bbf-2daa-41e2-a67e-55632e88211f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.330235 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32130bbf-2daa-41e2-a67e-55632e88211f-kube-api-access-cns2r" (OuterVolumeSpecName: "kube-api-access-cns2r") pod "32130bbf-2daa-41e2-a67e-55632e88211f" (UID: "32130bbf-2daa-41e2-a67e-55632e88211f"). InnerVolumeSpecName "kube-api-access-cns2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.380858 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32130bbf-2daa-41e2-a67e-55632e88211f" (UID: "32130bbf-2daa-41e2-a67e-55632e88211f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.418342 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cns2r\" (UniqueName: \"kubernetes.io/projected/32130bbf-2daa-41e2-a67e-55632e88211f-kube-api-access-cns2r\") on node \"crc\" DevicePath \"\""
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.418377 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.418387 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32130bbf-2daa-41e2-a67e-55632e88211f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.639056 4787 generic.go:334] "Generic (PLEG): container finished" podID="32130bbf-2daa-41e2-a67e-55632e88211f" containerID="a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc" exitCode=0
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.639126 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw84t" event={"ID":"32130bbf-2daa-41e2-a67e-55632e88211f","Type":"ContainerDied","Data":"a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc"}
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.639182 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw84t" event={"ID":"32130bbf-2daa-41e2-a67e-55632e88211f","Type":"ContainerDied","Data":"ca3538f7831d93746e3635097c999793ba3f327c379e474a0b509973248534a0"}
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.639213 4787 scope.go:117] "RemoveContainer" containerID="a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc"
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.639126 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mw84t"
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.668906 4787 scope.go:117] "RemoveContainer" containerID="70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0"
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.695135 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mw84t"]
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.714179 4787 scope.go:117] "RemoveContainer" containerID="bd7847ca1e4e53f8f9c40ca4ab9ab027f2720f8c80c21137d4b92b2413c6c0e0"
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.715367 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mw84t"]
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.740672 4787 scope.go:117] "RemoveContainer" containerID="a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc"
Jan 26 20:30:16 crc kubenswrapper[4787]: E0126 20:30:16.741326 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc\": container with ID starting with a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc not found: ID does not exist" containerID="a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc"
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.741378 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc"} err="failed to get container status \"a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc\": rpc error: code = NotFound desc = could not find container \"a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc\": container with ID starting with a6a43b83a27bb14d29e074d4e7b2dd9ea2593314b63dd6654cd297f06cd281bc not found: ID does not exist"
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.741409 4787 scope.go:117] "RemoveContainer" containerID="70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0"
Jan 26 20:30:16 crc kubenswrapper[4787]: E0126 20:30:16.741828 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0\": container with ID starting with 70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0 not found: ID does not exist" containerID="70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0"
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.741872 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0"} err="failed to get container status \"70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0\": rpc error: code = NotFound desc = could not find container \"70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0\": container with ID starting with 70a2183701e8e603dad5733f1e3f60020535065466854db9e18b303182b2d2d0 not found: ID does not exist"
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.741904 4787 scope.go:117] "RemoveContainer" containerID="bd7847ca1e4e53f8f9c40ca4ab9ab027f2720f8c80c21137d4b92b2413c6c0e0"
Jan 26 20:30:16 crc kubenswrapper[4787]: E0126 20:30:16.742335 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd7847ca1e4e53f8f9c40ca4ab9ab027f2720f8c80c21137d4b92b2413c6c0e0\": container with ID starting with bd7847ca1e4e53f8f9c40ca4ab9ab027f2720f8c80c21137d4b92b2413c6c0e0 not found: ID does not exist" containerID="bd7847ca1e4e53f8f9c40ca4ab9ab027f2720f8c80c21137d4b92b2413c6c0e0"
Jan 26 20:30:16 crc kubenswrapper[4787]: I0126 20:30:16.742386 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd7847ca1e4e53f8f9c40ca4ab9ab027f2720f8c80c21137d4b92b2413c6c0e0"} err="failed to get container status \"bd7847ca1e4e53f8f9c40ca4ab9ab027f2720f8c80c21137d4b92b2413c6c0e0\": rpc error: code = NotFound desc = could not find container \"bd7847ca1e4e53f8f9c40ca4ab9ab027f2720f8c80c21137d4b92b2413c6c0e0\": container with ID starting with bd7847ca1e4e53f8f9c40ca4ab9ab027f2720f8c80c21137d4b92b2413c6c0e0 not found: ID does not exist"
Jan 26 20:30:17 crc kubenswrapper[4787]: I0126 20:30:17.605129 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32130bbf-2daa-41e2-a67e-55632e88211f" path="/var/lib/kubelet/pods/32130bbf-2daa-41e2-a67e-55632e88211f/volumes"